Our framework allows seamless integration of visual SLAM with a simulator (MINOS in this case): the agent moves within the simulator while, in parallel, visual SLAM (ORB-SLAM in this case) runs on the agent's front-view camera.
The framework can be used for training/testing of SLAM in an active setting, where the agent is required to evaluate the effects (e.g. the safety with respect to tracking) of the current step before planning the next motion. Please refer to DQN-SLAM to see our framework used in an active setting for reliable tracking.
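For intuition, the interaction can be pictured as a simple loop. The following is a minimal sketch, assuming hypothetical `simulator`, `slam`, and `planner` interfaces (the actual integration goes through ROS topics, as described below):

```python
# Minimal sketch of the active-SLAM loop (hypothetical interfaces).
# The agent steps through the simulator while SLAM tracks the front-view
# camera in parallel; the planner can check tracking safety before moving.
def active_slam_loop(simulator, slam, planner, n_steps=1000):
    observation = simulator.reset()
    for _ in range(n_steps):
        slam.track(observation.rgb_frame)    # feed the front-view frame to SLAM
        safe = not slam.tracking_lost()      # e.g. safety with respect to tracking
        action = planner.next_action(observation, safe)
        observation = simulator.step(action)
```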
Run the following commands in a terminal to install the prerequisite libraries:

```sh
sudo apt-get install python3.5-dev python3-tk build-essential libxi-dev libglu1-mesa-dev libglew-dev libopencv-dev libvips libboost-all-dev
sudo apt install git curl
```
- Install ROS Kinetic by following the instructions at http://wiki.ros.org/kinetic/Installation/Ubuntu
- In your home directory, clone ORB-SLAM:

```sh
git clone https://github.com/raulmur/ORB_SLAM
```
- After the download is complete, build the third-party packages. First build g2o: go into Thirdparty/g2o/ and execute:

```sh
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
make
```
- Build DBoW2. Go into Thirdparty/DBoW2/ and run:

```sh
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
make
```
- A few changes need to be made after compiling the third-party g2o and DBoW2 libraries, before building ORB_SLAM:
- In src/ORBextractor.cc, include the OpenCV library: `#include <opencv2/opencv.hpp>`
- Remove the opencv2 dependency from manifest.xml
- In CMakeLists.txt, add boost_system as a target link library, which can be done by replacing
```
target_link_libraries(${PROJECT_NAME}
${OpenCV_LIBS}
${EIGEN3_LIBS}
${PROJECT_SOURCE_DIR}/Thirdparty/DBoW2/lib/libDBoW2.so
${PROJECT_SOURCE_DIR}/Thirdparty/g2o/lib/libg2o.so
)
```
with
```
target_link_libraries(${PROJECT_NAME}
${OpenCV_LIBS}
${EIGEN3_LIBS}
${PROJECT_SOURCE_DIR}/Thirdparty/DBoW2/lib/libDBoW2.so
${PROJECT_SOURCE_DIR}/Thirdparty/g2o/lib/libg2o.so
-lboost_system
)
```
- Install Eigen from here: https://launchpad.net/ubuntu/trusty/amd64/libeigen3-dev/3.2.0-8
  Download the Debian file and install it using:

```sh
sudo dpkg -i libeigen3-dev_3.2.0-8_all.deb
```
- Before building ORB-SLAM, run this in a terminal (change the PC/user name accordingly):

```sh
export ROS_PACKAGE_PATH=${ROS_PACKAGE_PATH}:/home/romi
```

then run:

```sh
roslaunch ORB_SLAM ExampleGroovyOrNewer.launch
```

- In the file …/ORB_SLAM/src/Tracking.cc, on line 163 use the following line:

```cpp
ros::Subscriber sub = nodeHandler.subscribe("/usb_cam/image_raw", 1, &Tracking::GrabImage, this);
```
- Clone MINOS:

```sh
git clone https://github.com/minosworld/minos
```
- Download the v0.7x version of the repository.
- Go to the minos folder, press Ctrl+H (to show hidden files) and copy the .git folder. Paste this .git folder into the minos0.7x folder. Delete the minos folder and rename the minos0.7x folder to minos.
- Open the minos folder and open a terminal in it.
- Install node.js using the Node Version Manager (nvm):

```sh
curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.7/install.sh | bash
source ~/.bashrc
nvm install 8.11.3
nvm alias default 8.11.3
```

OR

```sh
nvm install v10.13.0
nvm alias default 10.13.0
```
- Build the MINOS server modules inside the server directory by:

```sh
npm install -g yarn
yarn install
```

OR (not recommended)

```sh
npm install
```

This process will download and compile all server module dependencies and might take a few minutes.
- Install the minos Python module by running `pip3 install -e .` in the root of the repository, or `pip3 install -e . -r requirements.txt`.
- Before running MINOS, copy the Matterport3D and SUNCG datasets into the work folder. Check that everything works by running the interactive client through:

```sh
python3 -m minos.tools.pygame_client
python3 -m minos.tools.pygame_client --dataset mp3d --scene_ids 17DRP5sb8fy --env_config pointgoal_mp3d_s --save_png --depth
```

invoked from the root of the MINOS repository. You should see a live view which you can control with the W/A/S/D keys and the arrow keys. This client can be configured through various command-line arguments. Run with the --help argument for an overview and try some of the other examples.
For merging purposes, we need to save the simulator frames in a folder of our choice. Go to minos/minos/lib/Simulator.py and make the following changes:

- Replace lines 98-102 with (using your own folder address):

```python
if 'logdir' in params:
    self._logdir = '/home/romi/frames'
else:
    self._logdir = '/home/romi/frames'
```
- Replace lines 422-428 with (using your own folder address):

```python
if self.params.get('save_png'):
    if image is None:
        image = Image.frombytes(mode, (data.shape[0], data.shape[1]), data)
    time.sleep(0.06)
    image.save('/home/romi/frames/color_.jpg')
```
- Create the ROS node (merger.cpp provided above).
This ROS node handles the communication between MINOS and ORB-SLAM. Simply paste the attached ROS node (merger.cpp) into catkin_ws/src and make the necessary changes to CMakeLists.txt by uncommenting the following:

```cmake
add_compile_options(-std=c++11)
add_executable(${PROJECT_NAME}_node src/merger.cpp)
target_link_libraries(${PROJECT_NAME}_node
  ${catkin_LIBRARIES}
)
```

and adding the following:

```cmake
find_package(catkin REQUIRED COMPONENTS
  roscpp
  rospy
  std_msgs
  image_transport
  cv_bridge
)
```
- Finally, compile with catkin_make and test by running the following command:

```sh
rosrun merger merger_node
```
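The attached merger.cpp is the reference implementation. For orientation only, a functionally similar node can be sketched in Python with rospy and cv_bridge; the frame path and topic name follow the setup above, while the 10 Hz polling rate is an assumption (it matches the Camera.fps setting below):

```python
#!/usr/bin/env python3
# Sketch of a merger-style node: read the frame that MINOS saves to disk
# and republish it on the topic that ORB-SLAM subscribes to.
import cv2
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

FRAME_PATH = '/home/romi/frames/color_.jpg'  # path used in Simulator.py above

def main():
    rospy.init_node('merger_sketch')
    pub = rospy.Publisher('/usb_cam/image_raw', Image, queue_size=1)
    bridge = CvBridge()
    rate = rospy.Rate(10)  # assumed rate; matches Camera.fps below
    while not rospy.is_shutdown():
        frame = cv2.imread(FRAME_PATH)
        if frame is not None:
            pub.publish(bridge.cv2_to_imgmsg(frame, encoding='bgr8'))
        rate.sleep()

if __name__ == '__main__':
    main()
```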
- Create a bash file to start the two systems (MINOS + ORB-SLAM) simultaneously (make adjustments accordingly). Open an editor:

```sh
/usr/bin/gedit ~/.bashrc
```

and add the following script:

```bash
#!/bin/bash
cat /dev/null > abc.txt
cat /dev/null > abc2.txt
rm -f /home/romi/frames/*
rm -f /home/romi/ORB_SLAM/bin/TrackLost.jpg
rm -f /home/romi/ORB_SLAM/bin/Tracking.jpg
clear
echo "Deleting Images"
rm -f /home/romi/frames/*
echo "Deleted Images"
echo "Starting ROSCORE"
roscore &
sleep 2
echo "Started ROSCORE"
cd /home/romi/minos
# Following command runs MINOS
python3 -m minos.tools.ans --dataset mp3d --scene_ids JeFG25nYj2p --env_config pointgoal_mp3d_s --save_png --width 600 --height 400 --agent_config agent_gridworld -s map --navmap &
cd /home/romi/ORB_SLAM
# Following command runs ORB_SLAM after a short delay
sleep 5
export ROS_PACKAGE_PATH=${ROS_PACKAGE_PATH}:/home/romi
roslaunch ORB_SLAM ExampleGroovyOrNewer.launch &
sleep 5
# Following command runs the integration algorithm after a short delay
cd /home/romi/catkin_ws
source devel/setup.bash
rosrun merger merger_node
```
To obtain features quickly, we need to make sure that the camera calibration parameters are set according to MINOS. Using the calibration method explained in our paper, we have calibrated the MINOS front-view camera for Matterport3D indoor scenes (parameters provided below). These settings must be set in /ORB_SLAM/Data/Settings.yaml:
```yaml
%YAML:1.0

# Camera Parameters. Adjust them!

# Camera calibration parameters (OpenCV)
Camera.fx: 890.246939
Camera.fy: 889.082597
Camera.cx: 378.899791
Camera.cy: 210.334985

# Camera distortion parameters (OpenCV)
Camera.k1: 0.224181
Camera.k2: -1.149847
Camera.p1: 0.007295
Camera.p2: 0.0

# Camera frames per second
Camera.fps: 10

# Color order of the images (0: BGR, 1: RGB. It is ignored if images are grayscale)
Camera.RGB: 1
```
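For reference, fx, fy, cx, cy define the standard pinhole projection (u = fx·X/Z + cx, v = fy·Y/Z + cy), and k1, k2, p1, p2 are the usual OpenCV distortion coefficients. A small sketch of how they would be used to undistort a saved frame (the image path is illustrative):

```python
# Sketch: undistort a saved MINOS frame with the calibration above.
import cv2
import numpy as np

K = np.array([[890.246939, 0.0, 378.899791],
              [0.0, 889.082597, 210.334985],
              [0.0, 0.0, 1.0]])
dist = np.array([0.224181, -1.149847, 0.007295, 0.0])  # k1, k2, p1, p2

img = cv2.imread('/home/romi/frames/color_.jpg')  # illustrative path
undistorted = cv2.undistort(img, K, dist)
cv2.imwrite('undistorted.jpg', undistorted)
```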
GTA_Simulator is a real-time simulator that uses GTA-V as the source and can add three levels of motion blur to it. It extracts images, and 6D poses for those images, from the GTA-V game using the ScriptHookV library. It is based on G2D: from GTA to Data by Anh-Dzung Doan et al.; in contrast to that work, however, we propose a real-time data extraction algorithm. After extraction, the images and 6D poses are saved in a network shared folder for another PC to access and use for mapping, etc. This system uses two PCs with GPUs: a Windows 10 PC that runs the GTA-V game and the data extraction algorithm, and an Ubuntu 16.04 PC that uses that data, with or without added motion blur, as the feed to a mapping algorithm such as ORB-SLAM.
The following are the prerequisites that need to be fulfilled:
1) Windows 10 PC
- GTA-V
- Microsoft Visual Studio 2017 or above
- Working G2D source code
- Network shared folder between Windows 10 and Ubuntu 16.04 PCs
2) Ubuntu 16.04 PC
- Network shared folder between Ubuntu 16.04 and Windows 10 PCs
- ROS Kinetic Kame
- Catkin Workspace
- ORB-SLAM
We'll show you how to create a network shared folder for use with this simulator:
Ubuntu 16.04 PC
- First create a folder named 'gta_live' in the home directory
- Inside it create another folder named 'images'. This is where we will save the extracted images.
- Then create an empty text file named pose.txt inside the 'gta_live' folder but outside the 'images' folder. Make sure to alter its permissions (e.g. `chmod 666 pose.txt`) so that others can also read and write; this way the Windows 10 PC can write to this file too. This is where we will save the poses of the extracted images.
- Then right click on 'gta_live' folder
- Open 'properties'
- Click on 'Local Network Share' tab
- Tick the 'Share this folder' box
- Also tick both the 'Allow others to create and delete files in this folder' and 'Guest access (for people without a user account)' boxes
- Then finally click on 'Create Share' button
Now that the folder is shared, we just need to access it from the Windows 10 PC.
Windows 10 PC
- Double click on 'This PC'
- Right click on empty space and select the option 'Add a network location'.
- A window will open up. Click on 'Next'
- Then choose the 'Choose a custom network location' option and click 'Next'
- Here you enter the IP address and folder name in the dialog box as such:
\\*ip address of Ubuntu 16.04 PC*\gta_live
- Click 'Next'
- Next you can select a custom name for this network location according to your preferences. This will not affect how the location is accessed; you will still have to enter the location in the script as above.
- Now the window will show a success message once the location is created. Click 'Finish'
- You can now see your newly added network shared folder on 'This PC' under 'Network Locations'
Once all the prerequisites have been fulfilled, we can begin to set up the simulator:
Windows 10 PC
- Go to the Trajectory Tool folder of G2D and replace the script.cpp and script.h files with ours from the GTA-Scripts folder.
- Open our script.cpp file in Visual Studio and replace the IP address on lines 29 and 30 with that of your Ubuntu 16.04 PC that has the network shared folder.
- Build the solution, then take the G2D-Trajectory.asi file from the bin folder and add it to the GTA-V directory.
You can also adjust the script.cpp file to get the camera pose in quaternions instead of pitch, roll and yaw in degrees: simply uncomment lines 134 and 142, and comment out line 143. A conversion sketch is given below.
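If you keep pitch, roll and yaw in degrees but later need quaternions, the conversion is standard. A minimal sketch; the rotation order (yaw about Z, pitch about Y, roll about X) is an assumption, so verify it against the convention used in script.cpp:

```python
# Sketch: convert pitch/roll/yaw in degrees to a quaternion (qx, qy, qz, qw).
# Assumes Z-Y-X (yaw-pitch-roll) rotation order; verify against script.cpp.
import math

def euler_deg_to_quaternion(pitch, roll, yaw):
    p, r, y = (math.radians(a) / 2.0 for a in (pitch, roll, yaw))
    cp, sp = math.cos(p), math.sin(p)
    cr, sr = math.cos(r), math.sin(r)
    cy, sy = math.cos(y), math.sin(y)
    qw = cr * cp * cy + sr * sp * sy
    qx = sr * cp * cy - cr * sp * sy
    qy = cr * sp * cy + sr * cp * sy
    qz = cr * cp * sy - sr * sp * cy
    return qx, qy, qz, qw
```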
Ubuntu 16.04 PC
- Create standard individual packages for Blur-GTA, Blur-Merger and Merger scripts with those names in catkin_ws.
- Remember to change the input and output directories in the scripts to your specific directories. Also remember to make the python scripts executable. Build the packages.
- Update the camera parameters for ORB-SLAM using the camera intrinsic parameters provided in the repository.
Once all necessary scripts and packages have been installed, we can get started on using the GTA_Simulator:
Windows 10 PC
- Run GTA-V. Once the game is up and running, you can use the Condition Tool of G2D to adjust conditions like time and weather. You can also use the Native Trainer of ScriptHookV to adjust the time and weather conditions of the game. However, to adjust the pedestrian and vehicle density you must use the Condition Tool of G2D.
- To start the extraction of images and poses from GTA-V, press the F5 key. A notification will let you know when the extraction starts.
- To end the extraction press the END key. Again a notification will let you know when extraction stops.
- You can find the images and 6D pose file in the network shared folder. The images will be in the images folder and the poses in the pose.txt file.
Ubuntu 16.04 PC
- You can use the provided bash script to automatically execute the other scripts, after adjusting it based on how much motion blur is needed, if any.
- This script will run ORB-SLAM and, if required, the blur nodes. The mapping will then start.
Be sure to wait a few seconds for ORB-SLAM to set up before starting the extraction of images and 6D poses.
The pose file contains the 6D poses in the following format:

With quaternions:

```
# pathtoimage tx ty tz qx qy qz qw
```

With pitch, roll and yaw in degrees:

```
# pathtoimage tx ty tz p r y
```
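For convenience, a minimal parsing sketch for the quaternion-format pose file (field layout follows the header above; lines starting with '#' are treated as comments):

```python
# Sketch: parse pose.txt lines of the form
#   pathtoimage tx ty tz qx qy qz qw
def read_poses(path='pose.txt'):
    poses = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith('#'):
                continue
            fields = line.split()
            image_path = fields[0]
            tx, ty, tz, qx, qy, qz, qw = (float(v) for v in fields[1:8])
            poses.append((image_path, (tx, ty, tz), (qx, qy, qz, qw)))
    return poses
```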
This work and any data acquired from it are for educational and research purposes only. Any commercial use is strictly prohibited.
Also, as a courtesy to Rockstar Games, please purchase the Grand Theft Auto V game.
If you have any questions or queries, please feel free to contact me at: ([email protected])
Contributions (bug reports, bug fixes, improvements, etc.) are very welcome and should be submitted in the form of new issues and/or pull requests on GitHub.
We would like to thank Alexander Blade, the original author of the ScriptHookV library, and all the contributors to the ScriptHookV library.
We would also like to thank Anh-Dzung Doan et al. for their G2D code, which gave us the base for this work.