This is an implementation of Domain Randomization tools within the ROS+Gazebo framework, following Tobin et al., "Domain Randomization for Transferring Deep Neural Networks from Simulation to the Real World".
It can be used to generate virtual datasets for an object recognition task of your choice: it automatically generates bounding boxes for the target object in every generated picture. The object must first be modeled as a .dae (COLLADA) file compatible with Gazebo.
This has been developed and tested with ROS Indigo and Gazebo v2.2.6.
Once the object you seek to recognize in real applications has been modeled in 3D as a .dae file, place that file at: ./models/GazeboDomainRandom/models/robot/mesh.dae
You then need to install the required models so that ROS and Gazebo can find them. To do so, execute:

```
./install_models.sh
```
This installs the models into the hidden .gazebo folder of your home directory (~/.gazebo).
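If you want to check that the installation succeeded, a minimal sketch like the following can help. It only assumes the standard ~/.gazebo/models location mentioned above; the exact model names installed by the script are not listed here.

```python
import os

# Gazebo looks up locally installed models in ~/.gazebo/models.
models_dir = os.path.expanduser("~/.gazebo/models")

if not os.path.isdir(models_dir):
    print("No ~/.gazebo/models folder found; run ./install_models.sh first.")
else:
    # List whatever install_models.sh copied there, so you can verify
    # that the GazeboDomainRandom models are present.
    for name in sorted(os.listdir(models_dir)):
        print(name)
```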
To generate a dataset, execute:

```
python createDataset_auto.py
```
The following command keys apply:
- n: create a new scene with random objects and colors. You will need to wait about 5 seconds for the objects to unload and reload.
- c: change the orientation of the objects currently in the scene.
- v: change the position of the objects currently in the scene.
- p: save the current picture to the ./src/dataset_test/images/ folder and create an XML annotation file with the bounding boxes for the target object in the ./src/dataset_test/annotations/ folder (a sketch of how to read these files back follows this list).
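As a quick sanity check of the saved image/annotation pairs, here is a minimal, hedged sketch of how the dataset could be read back. It assumes a Pascal VOC-like annotation layout (an `object` element containing `name` and a `bndbox` with `xmin`/`ymin`/`xmax`/`ymax` tags); adjust the tag names to whatever the generated XML actually contains.

```python
import os
import xml.etree.ElementTree as ET

annotations_dir = "./src/dataset_test/annotations/"

# Walk the annotation files and print the bounding boxes they contain.
# The tag names below (object, name, bndbox, xmin, ...) are an assumption
# based on the common Pascal VOC layout, not a confirmed schema.
for xml_name in sorted(os.listdir(annotations_dir)):
    if not xml_name.endswith(".xml"):
        continue
    root = ET.parse(os.path.join(annotations_dir, xml_name)).getroot()
    for obj in root.iter("object"):
        label = obj.findtext("name")
        box = obj.find("bndbox")
        xmin, ymin = box.findtext("xmin"), box.findtext("ymin")
        xmax, ymax = box.findtext("xmax"), box.findtext("ymax")
        print(xml_name, label, (xmin, ymin, xmax, ymax))
```

If the printed coordinates match what you see in the corresponding picture under ./src/dataset_test/images/, the dataset is ready to be fed to your training pipeline.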
This is still a work in progress with many flaws, and any contributions and/or advice are welcome :).