SUMO-RL provides a simple interface for instantiating Reinforcement Learning environments for Traffic Signal Control with SUMO v1.2.0.
The main class, SumoEnvironment, inherits from RLlib's MultiAgentEnv.
If instantiated with the parameter single_agent=True, it behaves like a regular OpenAI Gym Env.
The TrafficSignal class is responsible for retrieving information from and actuating the traffic lights via the TraCI API.
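For example, a single-agent environment can be created and used like any Gym Env. The sketch below is illustrative only: the file paths are placeholders for networks in this repository, the keyword arguments shown are assumptions about the most common options, and the import path may differ between versions.

```python
# Minimal sketch of single-agent usage; paths and keyword arguments are
# placeholders, not a definitive list of SumoEnvironment options.
from sumo_rl import SumoEnvironment

env = SumoEnvironment(
    net_file='nets/single-intersection.net.xml',    # placeholder path
    route_file='nets/single-intersection.rou.xml',  # placeholder path
    use_gui=False,
    num_seconds=20000,
    single_agent=True,  # expose the regular Gym Env interface
)

obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()  # replace with your agent's action
    obs, reward, done, info = env.step(action)
env.close()
```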
Goals of this repository:
- Provide a simple interface to work with Reinforcement Learning for Traffic Signal Control using SUMO.
- Support Multiagent RL.
- Compatibility with the Gym Env interface and popular RL libraries such as OpenAI Baselines and RLlib.
- Easy customisation: state and reward definitions are easily modifiable.
Install SUMO:
sudo add-apt-repository ppa:sumo/stable
sudo apt-get update
sudo apt-get install sumo sumo-tools sumo-doc
Don't forget to set the SUMO_HOME environment variable (the default SUMO installation path is /usr/share/sumo):
echo 'export SUMO_HOME="/usr/share/sumo"' >> ~/.bashrc
source ~/.bashrc
Then install sumo-rl from the root of this repository:
pip3 install -e .
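After installation, a quick sanity check (this snippet is not part of the repository, just a suggestion) is to confirm that SUMO_HOME is visible to Python and that the package imports:

```python
import os

# SUMO's tools are located through the SUMO_HOME environment variable,
# so make sure it is set before creating any environment.
assert 'SUMO_HOME' in os.environ, 'Please declare the SUMO_HOME environment variable'

import sumo_rl  # should import without errors after `pip3 install -e .`
print('SUMO_HOME =', os.environ['SUMO_HOME'])
```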
Check the experiments folder to see how to instantiate a SumoEnvironment and use it with your RL algorithm.
Q-learning in a one-way single intersection:
python3 experiments/ql_single-intersection.py
RLlib A3C multiagent in a 4x4 grid:
python3 experiments/a3c_4x4grid.py
stable-baselines A2C in a 2-way single intersection:
python3 experiments/a2c_2way-single-intersection.py
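That script roughly follows the standard stable-baselines pattern sketched below; the environment arguments and hyperparameters here are placeholders, so refer to the experiment script for the exact setup.

```python
# Rough sketch of training stable-baselines A2C on a single-agent
# SumoEnvironment; paths and hyperparameters are illustrative only.
from stable_baselines import A2C
from stable_baselines.common.vec_env import DummyVecEnv

from sumo_rl import SumoEnvironment  # import path may differ between versions

env = DummyVecEnv([lambda: SumoEnvironment(
    net_file='nets/2way-single-intersection.net.xml',    # placeholder path
    route_file='nets/2way-single-intersection.rou.xml',  # placeholder path
    single_agent=True,
    use_gui=False,
    num_seconds=100000,
)])

model = A2C('MlpPolicy', env, verbose=1)
model.learn(total_timesteps=100000)
```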
To plot the results:
python3 outputs/plot.py -f outputs/2way-single-intersection/a3c
If you use this repository in your research, please cite:
@misc{sumorl,
  author = {Lucas N. Alegre},
  title = {SUMO-RL},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/LucasAlegre/sumo-rl}},
}