Some key implementation steps of "Research on the visual robotic grasping strategy in cluttered scenes"

GeJunyan/Fusion-Mask-RCNN

Fusion-Mask-RCNN

This repository provides some key implementation steps of "Research on the visual robotic grasping strategy in cluttered scenes".

Dataset preparation

  1. "dataset_generation.py" generates a synthetic dataset for training via BlenderProc. You should prepare your object models and background textures for rendering in advance. The generated dataset includes RGB images, depth images, and mask annotations. For more details, see https://github.com/DLR-RM/BlenderProc.

  2. To generate HHA images, please see https://github.com/charlesCXK/Depth2HHA-python.
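As a concrete illustration of the mask annotations produced in step 1, the sketch below splits a labeled instance map (the kind of segmentation output a renderer like BlenderProc can produce) into per-object binary masks with COCO-style bounding boxes. The function name and annotation fields are our own illustration, not code from this repository:

```python
import numpy as np

def masks_from_instance_map(instance_map):
    """Split a labeled instance map (0 = background) into per-object
    binary masks with simple COCO-style bounding-box annotations."""
    annotations = []
    for obj_id in np.unique(instance_map):
        if obj_id == 0:  # skip background
            continue
        mask = (instance_map == obj_id)
        ys, xs = np.nonzero(mask)
        x0, y0 = int(xs.min()), int(ys.min())
        w, h = int(xs.max() - x0 + 1), int(ys.max() - y0 + 1)
        annotations.append({
            "id": int(obj_id),
            "bbox": [x0, y0, w, h],  # COCO-style [x, y, width, height]
            "area": int(mask.sum()),
            "mask": mask,
        })
    return annotations

# Example: a 4x4 scene with two labeled objects
scene = np.array([
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [2, 2, 0, 0],
    [2, 2, 0, 0],
])
anns = masks_from_instance_map(scene)
```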
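Depth2HHA-python handles the full conversion (it uses the camera intrinsics and estimates the gravity direction); the simplified numpy sketch below only illustrates the three channels that HHA encodes: horizontal disparity, height above the supporting plane, and the angle between the surface normal and gravity. The fixed camera-height assumption here is a placeholder, not the real algorithm:

```python
import numpy as np

def simple_hha(depth, camera_height=1.0):
    """Encode a metric depth map (meters) into three HHA-style channels.
    Simplified sketch: assumes a camera looking straight down its z-axis
    at a known height; Depth2HHA-python instead uses full intrinsics and
    an estimated gravity direction."""
    # Channel 1: horizontal disparity (inverse depth).
    disparity = 1.0 / np.clip(depth, 1e-3, None)
    # Channel 2: height above the supporting plane.
    height = np.clip(camera_height - depth, 0.0, None)
    # Channel 3: angle with gravity, from a crude gradient-based normal.
    dzdy, dzdx = np.gradient(depth)
    normal_z = 1.0 / np.sqrt(dzdx**2 + dzdy**2 + 1.0)
    angle = np.degrees(np.arccos(np.clip(normal_z, -1.0, 1.0)))
    return np.stack([disparity, height, angle], axis=-1)

depth = np.full((4, 4), 0.5)  # flat plane half a meter from the camera
hha = simple_hha(depth)
```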

Implementations of Fusion-Mask-RCNN

  1. Fusion-Mask-RCNN is implemented on top of Mask R-CNN (https://github.com/facebookresearch/maskrcnn-benchmark).
  2. To avoid uploading redundant files, we only provide "./data", "./demo", and "./modeling". These are the three folders we modified, and they contain our implementation of Fusion-Mask-RCNN.
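The modified folders contain the actual fusion layers; as a hedged sketch of the general two-stream idea, the snippet below fuses per-level feature maps from an RGB backbone and an HHA backbone. Element-wise summation keeps the channel count unchanged so the fused maps could feed a standard Mask R-CNN head; this is our illustration, not the repository's exact fusion scheme:

```python
import numpy as np

def fuse_features(rgb_feat, hha_feat, mode="sum"):
    """Fuse feature maps (C x H x W) from an RGB stream and an HHA stream.
    'sum' preserves the channel count so a standard Mask R-CNN head can
    consume the result; 'concat' doubles the channels and would require
    a 1x1 convolution afterwards to reduce them."""
    assert rgb_feat.shape == hha_feat.shape
    if mode == "sum":
        return rgb_feat + hha_feat
    if mode == "concat":
        return np.concatenate([rgb_feat, hha_feat], axis=0)  # channel axis
    raise ValueError(f"unknown fusion mode: {mode}")

rgb = np.ones((256, 32, 32))        # feature map from the RGB branch
hha = np.full((256, 32, 32), 2.0)   # feature map from the HHA branch
fused = fuse_features(rgb, hha)
```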
