SUMO-RL

SUMO-RL provides a simple interface to instantiate Reinforcement Learning environments with SUMO v1.2.0 for Traffic Signal Control.

The main class SumoEnvironment inherits MultiAgentEnv from RLlib.
If instantiated with the parameter single_agent=True, it behaves like a regular Gym Env from OpenAI.
The TrafficSignal class is responsible for retrieving information from and actuating the traffic lights via the TraCI API.
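
A minimal single-agent sketch of this usage is shown below; the .net/.rou file paths are placeholders, and the exact import path and constructor arguments may differ slightly between versions:

# Single-agent sketch: paths are placeholders; in some versions the import
# is `from sumo_rl.environment.env import SumoEnvironment` instead.
from sumo_rl import SumoEnvironment

env = SumoEnvironment(net_file='nets/single-intersection.net.xml',
                      route_file='nets/single-intersection.rou.xml',
                      use_gui=False,
                      num_seconds=20000,
                      single_agent=True)   # behaves like a regular Gym Env

obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()          # random policy as a stand-in
    obs, reward, done, info = env.step(action)  # standard Gym step signature
env.close()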

Goals of this repository:

  • Provide a simple interface to work with Reinforcement Learning for Traffic Signal Control using SUMO.
  • Support multi-agent RL (see the interaction sketch after this list).
  • Compatibility with Gym Env and popular RL libraries such as OpenAI Baselines and RLlib.
  • Easy customisation: state and reward definitions are easily modifiable.
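
Because the environment follows the MultiAgentEnv convention, the multi-agent interaction loop is dictionary-based, with observations, rewards and dones keyed by traffic-signal ID. The sketch below is illustrative only (placeholder network files and a dummy fixed policy):

# Multi-agent sketch: obs/rewards/dones are dicts keyed by traffic-signal ID.
from sumo_rl import SumoEnvironment

env = SumoEnvironment(net_file='nets/4x4.net.xml',    # placeholder network
                      route_file='nets/4x4.rou.xml',
                      use_gui=False,
                      num_seconds=20000)              # single_agent left at its default (False)

obs = env.reset()                       # {ts_id: observation}
done = {'__all__': False}
while not done['__all__']:
    # Replace with your per-agent policies; here every signal keeps phase 0.
    actions = {ts_id: 0 for ts_id in obs.keys()}
    obs, rewards, done, info = env.step(actions)
env.close()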

Install

To install SUMO v1.2.0:

sudo add-apt-repository ppa:sumo/stable
sudo apt-get update
sudo apt-get install sumo sumo-tools sumo-doc 

Don't forget to set the SUMO_HOME variable (the default SUMO installation path is /usr/share/sumo):

echo 'export SUMO_HOME="/usr/share/sumo"' >> ~/.bashrc
source ~/.bashrc
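
To verify the setup from Python, you can use the pattern SUMO's own examples rely on (the traci module is shipped with sumo-tools):

# Sanity check: SUMO_HOME must point at the SUMO installation so that
# the bundled Python tools (including traci) can be imported.
import os
import sys

if 'SUMO_HOME' in os.environ:
    sys.path.append(os.path.join(os.environ['SUMO_HOME'], 'tools'))
else:
    sys.exit("Please declare the environment variable 'SUMO_HOME'")

import traci  # provided by sumo-tools
print('TraCI available, SUMO_HOME =', os.environ['SUMO_HOME'])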

To install the sumo_rl package, run from the root of this repository:

pip3 install -e .

Examples

Check experiments to see how to instantiate a SumoEnvironment and use it with your RL algorithm.

Q-learning in a one-way single intersection:

python3 experiments/ql_single-intersection.py 

RLlib A3C multiagent in a 4x4 grid:

python3 experiments/a3c_4x4grid.py

stable-baselines A2C in a 2-way single intersection:

python3 experiments/a2c_2way-single-intersection.py
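
The experiment scripts are the reference implementation; as a rough idea of what such a stable-baselines A2C setup involves (paths and hyperparameters below are placeholders, not the repository's actual script):

# Sketch of a stable-baselines (v2.x) A2C setup around SumoEnvironment.
from stable_baselines import A2C
from stable_baselines.common.vec_env import DummyVecEnv

from sumo_rl import SumoEnvironment

env = SumoEnvironment(net_file='nets/2way-single-intersection.net.xml',
                      route_file='nets/2way-single-intersection.rou.xml',
                      single_agent=True,
                      use_gui=False,
                      num_seconds=20000)

vec_env = DummyVecEnv([lambda: env])          # stable-baselines expects a VecEnv
model = A2C('MlpPolicy', vec_env, verbose=1)
model.learn(total_timesteps=100000)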

To plot results:

python3 outputs/plot.py -f outputs/2way-single-intersection/a3c 


Cite

If you use this repository in your research, please cite:

@misc{sumorl,
    author = {Lucas N. Alegre},
    title = {SUMO-RL},
    year = {2019},
    publisher = {GitHub},
    journal = {GitHub repository},
    howpublished = {\url{https://github.com/LucasAlegre/sumo-rl}},
}
