This is the project that @epiception, @gupta-abhay, and I worked on for the Robot Localization and Mapping (16-833) course project at CMU in Spring 2019. The motivation of the project is to see how DeepVO can be enhanced using event frames. To read more about event cameras and event-based SLAM, see this link. Our reports and presentations can be found in the `reports` folder.
This is a PyTorch implementation. The code has been tested with PyTorch 0.4.1, CUDA 9.1, and CUDNN 7.1.2.
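To verify that your environment matches, a quick sanity check (a minimal sketch; these attributes exist in PyTorch 0.4.x and later):

```python
import torch

# Print the PyTorch version and the CUDA / cuDNN versions it was built against.
print("PyTorch:", torch.__version__)
print("CUDA:", torch.version.cuda)
print("cuDNN:", torch.backends.cudnn.version())
```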
We depend on a few packages, notably `tqdm`, `scikit-image`, `tensorboardX`, and `matplotlib`. They can be installed with standard pip as below:

```bash
pip install scipy scikit-image matplotlib tqdm tensorboardX
```
To replicate the Python environment we used for our experiments, you can start a Python virtual environment (instructions here) and then run:

```bash
pip install -r requirements.txt
```
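For reference, a virtual environment can be created and activated with Python's built-in `venv` module before running the install above (a minimal sketch; the name `env` is arbitrary):

```bash
# Create and activate a virtual environment (named 'env' here).
python3 -m venv env
source env/bin/activate
```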
The model assumes that FlowNet pre-trained weights are available for training. You can download the weights from @ClementPinard's implementation; in particular, we need the weights for FlowNetS (`flownets_EPE1.951.pth.tar`). Instructions for downloading the weights are in the README given there.
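As a sanity check that the download is intact, you can inspect the checkpoint before training (a hedged sketch; the key names follow the usual convention in @ClementPinard's FlowNetPytorch checkpoints and may differ in your copy):

```python
import torch

# Load the FlowNetS checkpoint on the CPU and list its top-level keys.
# Checkpoints from that repo typically hold a 'state_dict' plus metadata.
checkpoint = torch.load('flownets_EPE1.951.pth.tar', map_location='cpu')
print(list(checkpoint.keys()))
```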
This model also assumes the MVSEC dataset, available from the Daniilidis Group at the University of Pennsylvania. The code to sync the dataframes for event and intensity frames, along with the poses, is available in the `data` folder.
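To illustrate what syncing means here, a hypothetical sketch that matches each pose timestamp to the nearest frame timestamp (the function and variable names are ours, not the repo's; the real preprocessing in `data` is more involved):

```python
import numpy as np

def nearest_indices(source_ts, target_ts):
    """For each timestamp in target_ts, return the index of the nearest
    value in the sorted array source_ts."""
    idx = np.searchsorted(source_ts, target_ts)
    idx = np.clip(idx, 1, len(source_ts) - 1)
    left, right = source_ts[idx - 1], source_ts[idx]
    # Step back one slot wherever the left neighbour is closer.
    idx -= (target_ts - left) < (right - target_ts)
    return idx

# e.g. frame_for_pose = nearest_indices(frame_timestamps, pose_timestamps)
```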
Place the pre-processed data into each folder before running the models, or change `args.py` in each folder to accept data from a common folder, as sketched below.
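As an illustration of that change, the flag might look like the following (the `--datadir` name is hypothetical; check the actual `args.py` in each folder):

```python
import argparse

parser = argparse.ArgumentParser()
# Hypothetical flag: point every experiment at one shared data folder instead
# of copying the pre-processed data into each experiment directory.
parser.add_argument('--datadir', type=str, default='./data',
                    help='common folder with the pre-processed MVSEC sequences')
args = parser.parse_args()
```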
To run the code for the base DeepVO results (without any fusion), from the base directory of the repository, run:

```bash
cd finetune/
sh exec.sh
```
To run the fusion model, from the base directory of the repository, run:

```bash
cd finetune/
sh exec_fusion.sh
```
Similarly, the models for the ablation experiments, such as scratch and freeze, can be run from their respective folders.
This code has been largely adapted from the DeepVO implementation by krrish94.