Commit

Initial commit
aleflabo committed Aug 1, 2024
0 parents commit 5d33473
Showing 145 changed files with 100,843 additions and 0 deletions.
25 changes: 25 additions & 0 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -0,0 +1,25 @@
# editors
.idea/
.vscode/

# Python
*.egg-info/
.pytest_cache/
__pycache__
.cache/
venv*/
*/__pycache__/*
*.egg*

# data
data/
crowd_nav/data/
videos/
*.log
crowd_nav/log.txt
build/
devel/
.catkin_workspace

crowd_nav/wandb/
crowd_nav/Python-RVO2/
3 changes: 3 additions & 0 deletions .gitmodules
Original file line number Diff line number Diff line change
@@ -0,0 +1,3 @@
[submodule "safeCrowdNav"]
path = safeCrowdNav
url = https://github.com/Janet-xujing-1216/SafeCrowdNav.git
21 changes: 21 additions & 0 deletions LICENSE
Original file line number Diff line number Diff line change
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2023 dmartinezbaselga

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
91 changes: 91 additions & 0 deletions README.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,91 @@


# Hyp²Nav: Hyperbolic Planning and Curiosity for Crowd Navigation (IROS 2024)

_Guido D'Amely*, Alessandro Flaborea*, Pascal Mettes, Fabio Galasso_


The official PyTorch implementation of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2024 paper [**Hyp²Nav: Hyperbolic Planning and Curiosity for Crowd Navigation**](https://arxiv.org/abs/2407.13567).


[![Watch the video](video/iros_video.gif)](video/iros_video.mp4)

## Abstract
Autonomous robots are increasingly becoming a strong fixture in social environments. Effective crowd navigation requires not only safe yet fast planning, but also interpretability and computational efficiency for real-time operation on embedded devices. In this work, we advocate hyperbolic learning for crowd navigation and introduce Hyp2Nav. Unlike conventional reinforcement-learning-based crowd navigation methods, Hyp2Nav leverages the intrinsic properties of hyperbolic geometry to better encode the hierarchical nature of decision making in navigation tasks. We propose a hyperbolic policy model and a hyperbolic curiosity module that result in effective social navigation, with the best success rates and returns across multiple simulation settings, using up to six times fewer parameters than state-of-the-art competitors. With our approach, it even becomes possible to obtain policies that work in 2-dimensional embedding spaces, opening up new possibilities for low-resource crowd navigation and model interpretability. Insightfully, the internal hyperbolic representation of Hyp2Nav correlates with how much attention the robot pays to the surrounding crowd, e.g., due to multiple people occluding its pathway or a few of them showing colliding plans, rather than to its own planned route.
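For intuition, the quantity underlying such hyperbolic embeddings is the geodesic distance of the Poincaré ball model. The following self-contained PyTorch sketch (illustrative only, not the repository's implementation) computes it:

```python
# Geodesic distance in the unit Poincare ball (illustrative sketch).
import torch

def poincare_distance(x: torch.Tensor, y: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Distance between points x, y with ||x||, ||y|| < 1."""
    sq_diff = (x - y).pow(2).sum(dim=-1)
    nx = x.pow(2).sum(dim=-1)
    ny = y.pow(2).sum(dim=-1)
    return torch.acosh(1.0 + 2.0 * sq_diff / ((1.0 - nx) * (1.0 - ny) + eps))

# Distances grow without bound near the ball's boundary, which is what lets
# even a 2-dimensional embedding encode hierarchy-like structure.
origin = torch.zeros(2)
near_boundary = torch.tensor([0.99, 0.0])
print(poincare_distance(origin, near_boundary))  # ~5.29
```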

## Setup
1. Install the [Python-RVO2](https://github.com/sybrenstuvel/Python-RVO2) library (detailed steps added by aleflabo):
    - Clone Python-RVO2 and `cd` into the repository.
    - `sudo apt-get install cmake`
    - `cmake build .`
    - If this step does not work:
        - `pip install cmake`
        - `pip install -r requirements.txt`
        - For Python >= 3.7: `pip install Cython`
    - `python setup.py build`
    - If this step does not work:
        - `cd ..`
        - `rm -r Python-RVO2`
        - `git clone [email protected]:sybrenstuvel/Python-RVO2.git` (a fresh clone of the Python-RVO2 repo), then repeat the build
    - `python setup.py install`
2. Install the [socialforce](https://github.com/ChanganVR/socialforce) library (added by aleflabo):
    - `pip install 'socialforce[test,plot]'`
3. Install crowd_sim and crowd_nav as editable packages with pip:
```
pip install -e .
```
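
After these steps, a quick import check can confirm the environment is ready. This is a minimal sketch, assuming the standard module names of the libraries above (`rvo2` for Python-RVO2, `socialforce`) together with this repository's packages:

```python
# Sanity check: verify that all dependencies import cleanly.
# Module names are assumed from the respective projects' documentation.
import rvo2         # Python-RVO2 bindings (ORCA crowd simulation)
import socialforce  # social-force pedestrian model
import crowd_sim    # simulation environment from this repository
import crowd_nav    # training/testing code from this repository

print("All packages imported successfully.")
```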

## Getting Started
This repository is organized in two parts: the crowd_sim/ folder contains the simulation environment, and the crowd_nav/ folder contains the code for training and testing the policies. Details of the simulation framework can be found [here](crowd_sim/README.md). Below are the instructions for training and testing policies; they should be executed inside the crowd_nav/ folder.

1. Train a policy.
```
python train.py --policy tree-search-rl --output_dir data/tsrl_random_encoder/ --config configs/icra_benchmark/ts_separate_random_encoder.py
```
With ICM and wandb logging:
```
python train.py --policy tree-search-rl --output_dir data/tsrl_random_encoder/ --config configs/icra_benchmark/ts_separate_curiosity.py --gpu --wandb_mode online --wandb_name ICM
```
With the hyperbolic policy (HyperVNet) and hyperbolic curiosity (HHICM), using a 2-dimensional embedding:
```
python train.py --policy tree-search-rl --output_dir data/HyperVnet_HHICM_embDim=from32to2/ --config configs/icra_benchmark/ts_HVNet_Hypercuriosity.py --gpu --wandb_mode online --wandb_name HyperVnet_HHICM_embDim=from32to2 --embedding_dimension=2
```
2. Test policies with 1000 test cases.
```
python test.py --model_dir data/ICM_reproducing/
```
3. Run a policy for one episode and visualize the result.
```
python test.py --policy tree-search-rl --model_dir data/tsrl_random_encoder/ --phase test --visualize --test_case 0
```
To test a hyperbolic model (embedding dimension 64) on the GPU:
```
python test.py --model_dir data/360_HyperVnet_HHICM_embDim=from32to2_human10 --hyperbolic --embedding_dimension 64 --gpu --device cuda:0
```
To run and visualize a single test case, saving it as a GIF:
```
python test.py --policy tree-search-rl --model_dir data/360_HyperVnet_HHICM_embDim=from32to2_human10 --phase test --visualize --test_case 50 --hyperbolic --video_file /home/aflabor/HypeRL/crowd_nav/data/360_HyperVnet_HHICM_embDim=from32to2/video_model_HHICM_embDIm=2_test_50.gif --embedding_dimension 2
```
To visualize a test case with 10 humans and save the video:
```
python test.py --policy tree-search-rl --model_dir data/360_HyperVnet_HHICM_embDim=from32to2_human10 --phase test --visualize --test_case 35 --hyperbolic --video_file /home/aleflabo/amsterdam/intrinsic-rewards-navigation/crowd_nav/data/360_HyperVnet_HHICM_embDim=from32to2_human10/video_submission/video_model_HHICM_embDIm=2_test_ --embedding_dimension 2 --human_num 10
```

Note that **run_experiments_icra.sh** contains examples of how to train different policies with several exploration algorithms, and **configs/icra_benchmark/** contains all the configurations used for testing.



## Acknowledgments
This work is based on [CrowdNav](https://github.com/vita-epfl/CrowdNav), [RelationalGraphLearning](https://github.com/ChanganVR/RelationalGraphLearning), [SG-D3QN](https://github.com/nubot-nudt/SG-D3QN), and [SG-D3QN-intrinsic](https://github.com/dmartinezbaselga/intrinsic-rewards-navigation). The authors are grateful for these works and for making them publicly available.

## Citation
If you use this work in your own research or wish to refer to the paper's results, please use the following BibTeX entry.
```bibtex
@inproceedings{damely24hyp2nav,
  author    = {Di Melendugno, Guido Maria D'Amely and Flaborea, Alessandro and Mettes, Pascal and Galasso, Fabio},
  booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  title     = {Hyp2Nav: Hyperbolic Planning and Curiosity for Crowd Navigation},
  year      = {2024},
  url       = {https://arxiv.org/abs/2407.13567}
}
```
Binary file added crowd_nav/Figure_1.png
98 changes: 98 additions & 0 deletions crowd_nav/README.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,98 @@


# Improving robot navigation in crowded environments using intrinsic rewards (ICRA 2023)

# [Paper](https://arxiv.org/abs/2302.06554) || [Video](https://youtu.be/Ksbok2YM9YY)

## Poster
<img src="doc/icra2023posterA0.png" width="1000" />

## Abstract
Autonomous navigation in crowded environments is an open problem with many applications, essential for the coexistence of robots and humans in the smart cities of the future. In recent years, deep reinforcement learning approaches have proven to outperform model-based algorithms. Nevertheless, even though the reported results are promising, these works do not take full advantage of the capabilities their models offer: they usually get trapped in local optima during training, which prevents them from learning the optimal policy, and they fail to visit and interact appropriately with every possible state, such as states near the goal or near dynamic obstacles. In this work, we propose using intrinsic rewards to balance exploration and exploitation, exploring according to the uncertainty of the states rather than to how long the agent has been trained, thereby encouraging the agent to become more curious about unknown states. We explain the benefits of the approach and compare it with other exploration algorithms that may be used for crowd navigation. Extensive simulation experiments, performed by modifying several state-of-the-art algorithms, show that the use of intrinsic rewards makes the robot learn faster and reach higher rewards and success rates (fewer collisions) in shorter navigation times, outperforming the state of the art.
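
For intuition, curiosity-driven exploration of this kind (e.g., ICM) rewards the agent with the prediction error of a learned forward model, so novel, poorly predicted states yield a high intrinsic reward. A minimal PyTorch sketch follows; the class and parameter names (`ICM`, `eta`) are illustrative assumptions, not this repository's actual module:

```python
# Illustrative ICM-style intrinsic reward (Pathak et al., 2017), not the repo's code.
import torch
import torch.nn as nn

class ICM(nn.Module):
    def __init__(self, state_dim, action_dim, feat_dim=32, eta=0.5):
        super().__init__()
        # Encode raw states into a compact feature space.
        self.encoder = nn.Sequential(nn.Linear(state_dim, feat_dim), nn.ReLU())
        # Predict the next state's features from current features and action.
        self.forward_model = nn.Sequential(
            nn.Linear(feat_dim + action_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )
        self.eta = eta  # intrinsic-reward scale

    def intrinsic_reward(self, state, action, next_state):
        phi_next = self.encoder(next_state)
        phi_pred = self.forward_model(torch.cat([self.encoder(state), action], dim=-1))
        # Forward-model prediction error: large in rarely visited states.
        return self.eta * 0.5 * (phi_pred - phi_next).pow(2).sum(dim=-1)

icm = ICM(state_dim=14, action_dim=2)
s, a, s_next = torch.randn(1, 14), torch.randn(1, 2), torch.randn(1, 14)
print(icm.intrinsic_reward(s, a, s_next))  # added to the extrinsic reward during training
```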

## Setup
1. Install the [Python-RVO2](https://github.com/sybrenstuvel/Python-RVO2) library (detailed steps added by aleflabo):
    - Clone Python-RVO2 and `cd` into the repository.
    - `sudo apt-get install cmake`
    - `cmake build .`
    - If this step does not work:
        - `pip install cmake`
        - `pip install -r requirements.txt`
        - For Python >= 3.7: `pip install Cython`
    - `python setup.py build`
    - If this step does not work:
        - `cd ..`
        - `rm -r Python-RVO2`
        - `git clone [email protected]:sybrenstuvel/Python-RVO2.git` (a fresh clone of the Python-RVO2 repo), then repeat the build
    - `python setup.py install`
2. Install the [socialforce](https://github.com/ChanganVR/socialforce) library (added by aleflabo):
    - `pip install 'socialforce[test,plot]'`
3. Install crowd_sim and crowd_nav as editable packages with pip:
```
pip install -e .
```

## Getting Started
This repository is organized in two parts: the crowd_sim/ folder contains the simulation environment, and the crowd_nav/ folder contains the code for training and testing the policies. Details of the simulation framework can be found [here](crowd_sim/README.md). Below are the instructions for training and testing policies; they should be executed inside the crowd_nav/ folder.

1. Train a policy.
```
python train.py --policy tree-search-rl --output_dir data/tsrl_random_encoder/ --config configs/icra_benchmark/ts_separate_random_encoder.py
```
With ICM and wandb logging:
```
python train.py --policy tree-search-rl --output_dir data/tsrl_random_encoder/ --config configs/icra_benchmark/ts_separate_curiosity.py --gpu --wandb_mode online --wandb_name ICM
```
With the hyperbolic policy (HyperVNet) and hyperbolic curiosity (HHICM), using a 2-dimensional embedding:
```
python train.py --policy tree-search-rl --output_dir data/HyperVnet_HHICM_embDim=from32to2/ --config configs/icra_benchmark/ts_HVNet_Hypercuriosity.py --gpu --wandb_mode online --wandb_name HyperVnet_HHICM_embDim=from32to2 --embedding_dimension=2
```
2. Test policies with 1000 test cases.
```
python test.py --model_dir data/ICM_reproducing/
```
3. Run a policy for one episode and visualize the result.
```
python test.py --policy tree-search-rl --model_dir data/tsrl_random_encoder/ --phase test --visualize --test_case 0
```
To test a hyperbolic model (embedding dimension 64) on the GPU:
```
python test.py --model_dir data/360_HyperVnet_HHICM_embDim=from32to2_human10 --hyperbolic --embedding_dimension 64 --gpu --device cuda:0
```
To run and visualize a single test case, saving it as a GIF:
```
python test.py --policy tree-search-rl --model_dir data/360_HyperVnet_HHICM_embDim=from32to2_human10 --phase test --visualize --test_case 50 --hyperbolic --video_file /home/aflabor/HypeRL/crowd_nav/data/360_HyperVnet_HHICM_embDim=from32to2/video_model_HHICM_embDIm=2_test_50.gif --embedding_dimension 2
```
To visualize a test case with 10 humans and save the video:
```
python test.py --policy tree-search-rl --model_dir data/360_HyperVnet_HHICM_embDim=from32to2_human10 --phase test --visualize --test_case 35 --hyperbolic --video_file /home/aleflabo/amsterdam/intrinsic-rewards-navigation/crowd_nav/data/360_HyperVnet_HHICM_embDim=from32to2_human10/video_submission/video_model_HHICM_embDIm=2_test_ --embedding_dimension 2 --human_num 10
```

Note that **run_experiments_icra.sh** contains examples of how to train different policies with several exploration algorithms, and **configs/icra_benchmark/** contains all the configurations used for testing.


## Improving SG-D3QN

<img src="doc/success.gif" width="1000" />

<img src="doc/time.gif" width="1000" />
## Improving other models

<img src="doc/models.gif" width="1000" />


## Acknowledgments
This work is based on [CrowdNav](https://github.com/vita-epfl/CrowdNav), [RelationalGraphLearning](https://github.com/ChanganVR/RelationalGraphLearning), and [SG-D3QN](https://github.com/nubot-nudt/SG-D3QN). The authors are grateful for these works and for making them publicly available.

## Citation
If you use this work in your own research or wish to refer to the paper's results, please use the following BibTeX entry.
```bibtex
@inproceedings{martinez2023improving,
  author    = {Martinez-Baselga, Diego and Riazuelo, Luis and Montano, Luis},
  booktitle = {2023 IEEE International Conference on Robotics and Automation (ICRA)},
  title     = {Improving robot navigation in crowded environments using intrinsic rewards},
  year      = {2023},
  pages     = {9428-9434},
  doi       = {10.1109/ICRA48891.2023.10160876},
  url       = {https://arxiv.org/abs/2302.06554}
}
```
12 changes: 12 additions & 0 deletions crowd_nav/SG-DQN/run_experiments_mprl.sh
Original file line number Diff line number Diff line change
@@ -0,0 +1,12 @@
#!/bin/bash
day=$(date +%m%d)
echo "The script started on $day"
# Script to reproduce results
for ((i=0;i<3;i+=1))
do
    python train.py \
        --policy model-predictive-rl \
        --output_dir data/$day/mprl/$i \
        --randomseed $i \
        --config configs/icra_benchmark/mp_separate.py
done
27 changes: 27 additions & 0 deletions crowd_nav/SG-DQN/run_experiments_mprl_10.sh
Original file line number Diff line number Diff line change
@@ -0,0 +1,27 @@
#!/bin/bash
day=$(date +%m%d)
echo "The script started on $day"
# Reward shaping weights
a=0.2    # goal weight
b=-0.25  # collision penalty
c=0.25   # arrival reward
# Script to reproduce results
for ((i=0;i<10;i+=1))
do
    python train.py \
        --policy model-predictive-rl \
        --output_dir data/$day/mprl/$i \
        --randomseed $i \
        --config configs/icra_benchmark/mp_separate.py \
        --safe_weight 1.0 \
        --goal_weight $a \
        --re_collision $b \
        --re_arrival $c \
        --human_num 10

    # python train.py \
    #     --policy model-predictive-rl \
    #     --output_dir data/$day/mprl/$i \
    #     --randomseed $i \
    #     --config configs/icra_benchmark/mp_separate.py
done

18 changes: 18 additions & 0 deletions crowd_nav/SG-DQN/run_experiments_td3rl.sh
Original file line number Diff line number Diff line change
@@ -0,0 +1,18 @@
#!/bin/bash
day=$(date +%m%d)
echo "The script started on $day"
# Script to reproduce results
for ((i=0;i<10;i+=1))
do
    python train.py \
        --policy td3_rl \
        --output_dir data/$day/td3/$i \
        --randomseed $i \
        --config configs/icra_benchmark/td3.py

    python train.py \
        --policy model-predictive-rl \
        --output_dir data/$day/mprl/$i \
        --randomseed $i \
        --config configs/icra_benchmark/mp_separate.py
done
27 changes: 27 additions & 0 deletions crowd_nav/SG-DQN/run_experiments_tsrl.sh
Original file line number Diff line number Diff line change
@@ -0,0 +1,27 @@
#!/bin/bash
day=$(date +%m%d)
echo "The script started on $day"
# Reward shaping weights
a=0.2    # goal weight
b=-0.25  # collision penalty
c=0.25   # arrival reward
# Script to reproduce results
for ((i=0;i<10;i+=1))
do
    python train.py \
        --policy tree-search-rl \
        --output_dir data/$day/tsrl/$i \
        --randomseed $i \
        --config configs/icra_benchmark/ts_separate.py \
        --safe_weight 1.0 \
        --goal_weight $a \
        --re_collision $b \
        --re_arrival $c \
        --human_num 10

    # python train.py \
    #     --policy model-predictive-rl \
    #     --output_dir data/$day/mprl/$i \
    #     --randomseed $i \
    #     --config configs/icra_benchmark/mp_separate.py
done

Empty file added crowd_nav/__init__.py
Empty file added crowd_nav/configs/__init__.py
19 changes: 19 additions & 0 deletions crowd_nav/configs/icra_benchmark/cadrl.py
Original file line number Diff line number Diff line change
@@ -0,0 +1,19 @@
from crowd_nav.configs.icra_benchmark.config import BaseEnvConfig, BasePolicyConfig, BaseTrainConfig, Config


class EnvConfig(BaseEnvConfig):
    def __init__(self, debug=False):
        super(EnvConfig, self).__init__(debug)


class PolicyConfig(BasePolicyConfig):
    def __init__(self, debug=False):
        super(PolicyConfig, self).__init__(debug)
        self.name = 'cadrl'
        self.use_noisy_net = False


class TrainConfig(BaseTrainConfig):
    def __init__(self, debug=False):
        super(TrainConfig, self).__init__(debug)
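
For reference, a minimal sketch of how such a config file might be consumed; the exact wiring lives in train.py and may differ:

```python
# Illustrative only: instantiate the three config objects and read their
# attributes, as a training script would be expected to do.
from crowd_nav.configs.icra_benchmark.cadrl import EnvConfig, PolicyConfig, TrainConfig

env_config = EnvConfig(debug=False)
policy_config = PolicyConfig(debug=False)
train_config = TrainConfig(debug=False)

print(policy_config.name)           # 'cadrl'
print(policy_config.use_noisy_net)  # False
```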