Assuming you have conda installed, let's prepare a conda env:
# We require python>=3.7 and cmake>=3.10
conda create -n habitat python=3.7 cmake=3.14.0
conda activate habitat
- To install habitat-sim with bullet physics:
conda install habitat-sim withbullet headless -c conda-forge -c aihabitat
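To confirm the install succeeded, a quick import check can be run inside the activated env (an optional sanity check, not part of the original setup):

```python
# Optional sanity check: a clean import means the conda package and its
# native extensions loaded correctly inside the habitat env.
import habitat_sim

print("habitat-sim imported successfully")
```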
Clone this repository:
git clone https://github.com/sashankmodali/LearningToNavigate.git
cd LearningToNavigate
Install Habitat-lab using the following commands:
git clone https://github.com/sashankmodali/habitat-lab.git
cd habitat-lab
pip install -r requirements.txt
python setup.py develop --all # install habitat and habitat_baselines
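If everything installed cleanly, both packages should now be importable (another optional sanity check):

```python
# Optional sanity check: setup.py develop --all installs both packages,
# so both imports should succeed inside the habitat env.
import habitat
import habitat_baselines

print("habitat and habitat_baselines imported successfully")
```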
The dataset can be downloaded from here
Gibson scene datasets can be downloaded from here
Object datasets can be downloaded from here
The folder structure should be as follows:
LearningToNavigate/
  habitat-lab/
  data/
    scene_datasets/
      gibson/
        Adrian.glb
        Adrian.navmesh
        ...
    datasets/
      pointnav/
        gibson/
          v1/
            train/
            val/
            ...
    object_datasets/
      banana.glb
      ...
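Before running anything, a short script like the one below can verify the layout (a minimal sketch; it assumes it is run from the LearningToNavigate root and simply mirrors the tree above):

```python
# Verify the expected dataset layout; run from the LearningToNavigate root.
import os

expected = [
    "data/scene_datasets/gibson",
    "data/datasets/pointnav/gibson/v1/train",
    "data/datasets/pointnav/gibson/v1/val",
    "data/object_datasets",
]
for path in expected:
    status = "ok" if os.path.isdir(path) else "MISSING"
    print(f"{status:8s} {path}")
```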
Create a symlink to the data directory inside habitat-lab:
cd habitat-lab
ln -s ../data data
cd ..
- For Milestone 1, run the following:
. milestone1.sh
OR
conda activate habitat
python main.py --print_images 1
- To generate the training data and train the depth1 model, run the following:
. generate_training_data.sh
python train_depth1.py
OR
conda activate habitat
python main.py --print_images 1 -d ./training_data/ -el 10000 --task generate_train
python train_depth1.py
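train_depth1.py trains on the frames dumped to ./training_data/ by the generate_train task. The loader below is only an illustrative sketch: the file names and the RGB/depth pairing scheme are assumptions for illustration, not the actual format written by main.py.

```python
# Hypothetical loader sketch for the dumped training data.
# ASSUMPTION: ./training_data/ contains paired <id>_rgb.png / <id>_depth.png
# files; the real layout is whatever main.py --task generate_train writes.
import glob
import os

import numpy as np
from PIL import Image
from torch.utils.data import Dataset


class RGBDepthPairs(Dataset):
    def __init__(self, root="./training_data/"):
        self.rgb_paths = sorted(glob.glob(os.path.join(root, "*_rgb.png")))

    def __len__(self):
        return len(self.rgb_paths)

    def __getitem__(self, idx):
        rgb_path = self.rgb_paths[idx]
        depth_path = rgb_path.replace("_rgb.png", "_depth.png")
        rgb = np.asarray(Image.open(rgb_path), dtype=np.float32) / 255.0
        depth = np.asarray(Image.open(depth_path), dtype=np.float32)
        return rgb, depth
```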
- For Milestone 2, run the following:
. milestone2.sh
OR
conda activate habitat
python nslam.py --split val --eval 1 --train_global 0 --train_local 0 --train_slam 0 --load_global pretrained_models/model_best.global --load_local pretrained_models/model_best.local --load_slam pretrained_models/model_best.slam -n 1 --print_images 1
python generate_video.py
- For Final Evaluations, run the following:
. eval_ppo_st.sh
AND
. eval_ppo.sh
AND
. eval_ans.sh
Then, the results can be found in /tmp/dump/[experiment]/episodes/1/1/
To generate a video, run:
python generate_video.py
Then, the video can be found at /tmp/dump/[experiment]/episodes/1/1/video.avi
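generate_video.py assembles the dumped episode frames into the .avi above. If you need to stitch frames for a different run yourself, a minimal OpenCV sketch along these lines works (the .png frame naming and the FPS are assumptions; adapt them to your dump):

```python
# Minimal frame-stitching sketch with OpenCV (pip install opencv-python).
# ASSUMPTION: the dump directory holds sequentially named .png frames.
import glob

import cv2

dump_dir = "/tmp/dump/exp1/episodes/1/1"  # replace exp1 with your experiment
frames = sorted(glob.glob(f"{dump_dir}/*.png"))

height, width = cv2.imread(frames[0]).shape[:2]
writer = cv2.VideoWriter(
    f"{dump_dir}/video.avi",
    cv2.VideoWriter_fourcc(*"XVID"),
    10,  # frames per second; adjust to taste
    (width, height),
)
for path in frames:
    writer.write(cv2.imread(path))
writer.release()
```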