
Improved Relational Graph Learning

I optimized the original RelationalGraphLearning code for speed: training on GPUs and CPUs is about 4 times faster than with the previous code.

This repository contains the code for our paper, which was accepted at IROS 2020. For more details, please refer to the paper.

Abstract

We present a relational graph learning approach for robotic crowd navigation using model-based deep reinforcement learning that plans actions by looking into the future. Our approach reasons about the relations between all agents based on their latent features and uses a Graph Convolutional Network to encode higher-order interactions in each agent's state representation, which is subsequently leveraged for state prediction and value estimation. The ability to predict human motion allows us to perform multi-step lookahead planning, taking into account the temporal evolution of human crowds. We evaluate our approach against a state-of-the-art baseline for crowd navigation and ablations of our model to demonstrate that navigation with our approach is more efficient, results in fewer collisions, and avoids failure cases involving oscillatory and freezing behaviors.
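The code below is not the repository's implementation; it is a minimal PyTorch sketch of the idea stated above: infer a soft relation matrix between agents from their latent features, then propagate it with a two-layer graph convolution to obtain interaction-aware embeddings. The class name, layer sizes, and the similarity-based relation matrix are all assumptions made for illustration.

import torch
import torch.nn as nn

class RelationalGCNSketch(nn.Module):
    # Toy sketch: infer pairwise relations from latent agent features,
    # then propagate them with a two-layer graph convolution.
    def __init__(self, in_dim, latent_dim=32, out_dim=32):
        super().__init__()
        self.embed = nn.Linear(in_dim, latent_dim)    # per-agent latent features
        self.gc1 = nn.Linear(latent_dim, latent_dim)  # first graph-convolution layer
        self.gc2 = nn.Linear(latent_dim, out_dim)     # second graph-convolution layer

    def forward(self, agent_states):
        # agent_states: (num_agents, in_dim); e.g. row 0 is the robot, the rest are humans.
        x = torch.relu(self.embed(agent_states))
        # Soft relation matrix from pairwise similarity of the latent features.
        relations = torch.softmax(x @ x.t(), dim=-1)  # (N, N)
        x = torch.relu(self.gc1(relations @ x))       # first propagation step
        return self.gc2(relations @ x)                # interaction-aware embeddings

# Example: a robot plus four humans, each described by a 7-dimensional state vector.
states = torch.randn(5, 7)
print(RelationalGCNSketch(in_dim=7)(states).shape)   # torch.Size([5, 32])

In the full model described above, such embeddings would feed both the state predictor and the value network used for multi-step lookahead planning.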

Method Overview

Setup

  1. Install the Python-RVO2 library
  2. Install the socialforce library
  3. Install crowd_sim and crowd_nav as editable packages (a quick import check follows these steps):
pip install -e .
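To confirm that the three installs succeeded, you can run a short sanity check from Python. The module names below (rvo2, socialforce, crowd_sim, crowd_nav) are assumptions based on how these packages are usually exposed; adjust them if your installs differ.

# Quick sanity check that the dependencies import cleanly.
# The module names are assumptions, not guaranteed by this repository.
import rvo2          # Python-RVO2 (ORCA simulation bindings)
import socialforce   # social force model
import crowd_sim     # simulation environment from this repository
import crowd_nav     # training and testing code from this repository

print("All dependencies imported successfully.")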

Getting Started

This repository is organized in two parts: the crowd_sim/ folder contains the simulation environment, and the crowd_nav/ folder contains the code for training and testing the policies. Details of the simulation framework can be found here. Below are the instructions for training and testing policies; they should be executed inside the crowd_nav/ folder (a scripted version of the same steps is sketched after the list).

  1. Train a policy.
python train.py --policy rgl
  2. Test policies with 500 test cases.
python test.py --policy rgl --model_dir data/output --phase test
  3. Run the policy for one episode and visualize the result.
python test.py --policy rgl --model_dir data/output --phase test --visualize --test_case 0
  4. Plot the training curve.
python utils/plot.py data/output/output.log
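If you prefer to drive the same train/test/plot cycle from Python rather than typing each command, the sketch below simply shells out to the scripts above, with the flags and paths copied verbatim from this README; run it from inside the crowd_nav/ folder.

import subprocess

# Run each documented step in order; stop immediately if one of them fails.
subprocess.run(["python", "train.py", "--policy", "rgl"], check=True)
subprocess.run(["python", "test.py", "--policy", "rgl",
                "--model_dir", "data/output", "--phase", "test"], check=True)
subprocess.run(["python", "utils/plot.py", "data/output/output.log"], check=True)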

Video Demo
