✏️ Web-based image segmentation tool for object detection, localization and keypoints

Agricultural-Robotics-Bonn/coco-annotator


AgRobot COCO-Annotator

The Bonn Agricultural Robotics annotation tool, forked from jsbroks' COCO-Annotator.

This fork adds several features that make agriculture-related annotation easier.

AgRobot-COCO-Annotator additional features

  • The most common agriculture-related image filters (ExG, ExGExR, CIVE)
  • PyTorch support
  • Run PyTorch Mask-RCNN on images to produce annotations
  • Configurable auto-backup of the annotation database and simple recovery tools
  • Default high resolution annotation polygons
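The vegetation-index filters listed above (ExG, ExGExR, CIVE) are standard in agricultural vision. As a rough illustration of what they compute — not the tool's actual implementation — the common textbook definitions over an RGB image look like this (ExG and ExGExR on chromatic coordinates, CIVE on the raw channels; conventions vary):

```python
import numpy as np

def vegetation_indices(rgb):
    """Compute common vegetation indices from an H x W x 3 RGB array.

    These are the standard literature definitions; the annotator's exact
    formulas may differ.
    """
    rgb = np.asarray(rgb, dtype=np.float64)
    total = rgb.sum(axis=2, keepdims=True)
    total[total == 0] = 1.0  # avoid division by zero on black pixels
    r, g, b = np.moveaxis(rgb / total, 2, 0)  # chromatic coordinates

    exg = 2.0 * g - r - b        # Excess Green
    exr = 1.4 * r - g            # Excess Red
    exgexr = exg - exr           # ExG - ExR

    R, G, B = np.moveaxis(rgb, 2, 0)
    cive = 0.441 * R - 0.811 * G + 0.385 * B + 18.78745  # CIVE
    return exg, exgexr, cive
```

Thresholding any of these (e.g. ExG > 0) gives a quick vegetation/soil mask, which is what makes them useful as annotation aids.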

COCO-Annotator original features

  • Directly export to COCO format
  • Segmentation of objects
  • Ability to add key points
  • Useful API endpoints to analyze data
  • Import datasets already annotated in COCO format
  • Annotate disconnected objects as a single instance
  • Labeling image segments with any number of labels simultaneously
  • Allow custom metadata for each instance or object
  • Advanced selection tools such as DEXTR, MaskRCNN and Magic Wand
  • Annotate images with semi-trained models
  • Generate datasets using Google Images
  • User authentication system
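Several of the features above revolve around the COCO JSON format (direct export, importing pre-annotated datasets, disconnected parts as one instance). A minimal sketch of that layout, with made-up example values, following the public COCO spec:

```python
import json

# Minimal COCO-format dataset: the three top-level lists the format requires.
# All IDs, names and coordinates are made-up examples.
coco = {
    "images": [
        {"id": 1, "file_name": "field_0001.jpg", "width": 640, "height": 480},
    ],
    "categories": [
        {"id": 1, "name": "crop", "supercategory": "plant"},
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 1,
            # Polygon segmentation: [x1, y1, x2, y2, ...]; disconnected
            # parts of one instance become additional inner lists.
            "segmentation": [[100.0, 100.0, 200.0, 100.0, 150.0, 180.0]],
            "bbox": [100.0, 100.0, 100.0, 80.0],  # [x, y, width, height]
            "area": 4000.0,
            "iscrowd": 0,
        },
    ],
}

with open("annotations.json", "w") as f:
    json.dump(coco, f, indent=2)
```

Annotations reference images and categories by ID, which is why an imported dataset must keep those IDs consistent across the three lists.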

For examples and more information check out the wiki.

Install & Run server with PyTorch and CUDA support

To run PyTorch models on the GPU, the NVIDIA Docker runtime utilities must be installed and configured on the server machine before starting the annotation server.

Run install script

[path_to_repo]/scripts/install.sh

This script will:

  • Install the NVIDIA Docker runtime utilities
  • Build a Python 3 environment with PyTorch and cuDNN support as a Docker container
  • Generate deploy keys and save them in /scripts/keys; these need to be added to any repositories you want to use (for now, only Agricultural-Robotics-Bonn/agrobot-pytorch-mask-rcnn)

Run server with PyTorch and CUDA support

After installing and adding the deploy SSH key to your repositories, run the server with the following commands:

cd [path_to_annotator_repo]
sudo docker-compose -f docker-compose.torch_build.yml up --build

Database Auto-Backup configuration and recovery

Database auto-backup only works if the server's database is running as a replica set.

The replica set and backup scheme are configured in the server's docker-compose file.

Files supporting this are:

  • docker-compose.build.yml
  • docker-compose.torch_build.yml

Auto-Backup configuration

To change the auto-backup settings, edit the backup service entry in the docker-compose file you intend to run.

The most relevant settings you can change are:

  • Backup path:
  volumes:
    - [server_backup_path_here]:/backup
  • Backup frequency (CRON_TIME uses standard crontab syntax; e.g. 0 3 * * * runs a backup daily at 03:00):
  environment:
    - CRON_TIME=[crontab_backup_frequency_here]

Database Backup recovery

List the names of available backup files:

ls [server_backup_path_here]

With the annotation server running, run the following command on the server machine to restore the desired backup:

docker exec annotator_backup /restore.sh /backup/database-[backup_timestamp].archive.gz

TODOs:

  • Automate Agrobot-MaskRCNN download when building docker images.
  • Remove the bbox used by box-based detectors (e.g. torchbox) from the instance list
  • Agrobot-MaskRCNN performs poorly when used with torchbox on small bounding boxes.
  • Increase DEXTR detection mask's resolution (if possible)
  • Fix: the annotator sometimes becomes slow and crashes when using the eraser (which modifies the polygons in real time). When that image is re-opened, all instance masks are missing but are still listed in the right panel; if the image is then saved, all annotations for that image are lost. Related terminal output:
annotator_message_q | 2021-01-11 10:07:06.578 [error] <0.29064.2> closing AMQP connection <0.29064.2> (172.23.0.7:46456 -> 172.23.0.2:5672):
annotator_message_q | missed heartbeats from client, timeout: 60s
  • Re-Projection based label propagation

More info in the original repo:

Features · Wiki · Getting Started · Issues · License


COCO Annotator is a web-based image annotation tool designed for versatile and efficient labeling of images to create training data for image localization and object detection. It provides many distinct features, including the ability to label an image segment (or part of a segment), track object instances, label objects with disconnected visible parts, and efficiently store and export annotations in the well-known COCO format. The annotation process is delivered through an intuitive and customizable interface and provides many tools for creating accurate datasets.


Image annotations using COCO Annotator

Check out the video for a basic guide on installing and using COCO Annotator.


Note: This video is from v0.1.0 and many new features have been added.

Built With

Thanks to all these wonderful libraries/frameworks:

Backend

  • Flask - Python web microframework
  • MongoDB - Cross-platform document-oriented database
  • MongoEngine - Python object data mapper for MongoDB

Frontend

  • Vue - JavaScript framework for building user interfaces
  • Axios - Promise based HTTP client
  • PaperJS - HTML canvas vector graphics library
  • Bootstrap - Frontend component library

License

MIT

Citation

  @MISC{cocoannotator,
    author = {Justin Brooks},
    title = {{COCO Annotator}},
    howpublished = "\url{https://github.com/jsbroks/coco-annotator/}",
    year = {2019},
  }
