I optimized the code for speed: training on both GPU and CPU is about four times faster than the previous version.
This repository contains the code for our paper, accepted at IROS 2020. For more details, please refer to the paper.
We present a relational graph learning approach for robotic crowd navigation using model-based deep reinforcement learning that plans actions by looking into the future. Our approach reasons about the relations between all agents based on their latent features and uses a Graph Convolutional Network to encode higher-order interactions in each agent's state representation, which is subsequently leveraged for state prediction and value estimation. The ability to predict human motion allows us to perform multi-step lookahead planning, taking into account the temporal evolution of human crowds. We evaluate our approach against a state-of-the-art baseline for crowd navigation and ablations of our model to demonstrate that navigation with our approach is more efficient, results in fewer collisions, and avoids failure cases involving oscillatory and freezing behaviors.
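To make the relational encoding concrete, below is a minimal PyTorch sketch of one round of relational message passing: a soft pairwise relation (adjacency) matrix is inferred from the agents' latent features, and each agent's representation is then updated by aggregating its neighbors, GCN-style. The class name, layer sizes, and similarity-plus-softmax relation inference are illustrative assumptions, not the repository's exact model.

```python
import torch
import torch.nn as nn

class RelationalGraphLayer(nn.Module):
    """Illustrative sketch of one relational message-passing round:
    infer a soft relation matrix from latent agent features, then
    aggregate neighbor features with a GCN-style update."""

    def __init__(self, feat_dim: int, embed_dim: int):
        super().__init__()
        self.w_rel = nn.Linear(feat_dim, embed_dim)  # relation embedding
        self.w_msg = nn.Linear(feat_dim, feat_dim)   # message transform

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_agents, feat_dim) latent features for robot + humans
        e = self.w_rel(x)                       # (N, embed_dim)
        adj = torch.softmax(e @ e.t(), dim=-1)  # soft pairwise relations
        return torch.relu(adj @ self.w_msg(x))  # aggregate over neighbors

if __name__ == "__main__":
    layer = RelationalGraphLayer(feat_dim=32, embed_dim=16)
    h = layer(torch.randn(6, 32))  # e.g. one robot + five humans
    print(h.shape)                 # torch.Size([6, 32])
```

The updated per-agent representations feed both the state-prediction and value-estimation heads, which is what enables the multi-step lookahead planning described above.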
- Install the Python-RVO2 library
- Install the socialforce library
- Install crowd_sim and crowd_nav with pip:
```
pip install -e .
```
This repository is organized in two parts: the crowd_sim/ folder contains the simulation environment, and the crowd_nav/ folder contains the code for training and testing policies. Details of the simulation framework can be found here. A minimal usage sketch of the environment is shown below; the training and testing instructions follow, and all commands should be executed inside the crowd_nav/ folder.
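The snippet below is a hedged sketch of driving the simulation environment as a gym environment. The env id `CrowdSim-v0` and the `configure()`/`set_robot()` calls follow the CrowdNav-style interface this codebase builds on, and `env_config`/`robot` are placeholders for objects built from the configs in crowd_nav/; check crowd_sim/ for the exact registration and API.

```python
import gym
import crowd_sim  # importing registers the simulation environment with gym

# Hedged sketch: env id and configure()/set_robot() assume the
# CrowdNav-style interface; `env_config` and `robot` are placeholders
# for objects built from the configuration files in crowd_nav/.
env = gym.make('CrowdSim-v0')
env.configure(env_config)   # placeholder environment config object
env.set_robot(robot)        # placeholder Robot instance from crowd_sim

obs = env.reset()
done = False
while not done:
    action = robot.act(obs)                     # query the learned policy
    obs, reward, done, info = env.step(action)  # advance the crowd simulation
```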
- Train a policy:
```
python train.py --policy rgl
```
- Test policies with 500 test cases:
```
python test.py --policy rgl --model_dir data/output --phase test
```
- Run a policy for one episode and visualize the result:
```
python test.py --policy rgl --model_dir data/output --phase test --visualize --test_case 0
```
- Plot the training curve (see the parsing sketch below):
```
python utils/plot.py data/output/output.log
```
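If you prefer to inspect the log programmatically rather than through utils/plot.py, something like the following works. The `reward:` pattern is an assumed log format, so adjust the regex to match the actual lines in your output.log.

```python
import re
import matplotlib.pyplot as plt

# Assumed log format: lines containing a field like "reward: 0.123".
# utils/plot.py defines the real parsing; adapt the regex as needed.
rewards = []
with open('data/output/output.log') as f:
    for line in f:
        m = re.search(r'reward:\s*(-?\d+\.\d+)', line)
        if m:
            rewards.append(float(m.group(1)))

plt.plot(rewards)
plt.xlabel('training episode')
plt.ylabel('reward')
plt.show()
```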