MORL-Baselines is a library of Multi-Objective Reinforcement Learning (MORL) algorithms. This repository aims to provide reliable implementations of MORL algorithms in PyTorch.
It strictly follows the MO-Gymnasium API, which differs from the standard Gymnasium API only in that the environment returns a numpy array as the reward, with one component per objective.
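For example (a minimal sketch using the `minecart-v0` environment from MO-Gymnasium):

```python
import mo_gymnasium as mo_gym
import numpy as np

env = mo_gym.make("minecart-v0")
obs, info = env.reset()

# The only difference from Gymnasium: the reward is a numpy array,
# with one entry per objective.
next_obs, vector_reward, terminated, truncated, info = env.step(env.action_space.sample())
assert isinstance(vector_reward, np.ndarray)
```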
For details on multi-objective MDPs (MOMDPs) and other MORL definitions, we suggest reading A practical guide to multi-objective reinforcement learning and planning. An overview of some techniques used in various MORL algorithms is also provided in Multi-Objective Reinforcement Learning Based on Decomposition: A Taxonomy and Framework.
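In short, the SER (Scalarized Expected Return) and ESR (Expected Scalarized Return) criteria referenced throughout this README differ only in where the expectation is taken relative to the utility function u:

```latex
% SER: utility of the expected vector return
\text{SER:} \quad \max_\pi \; u\!\left(\mathbb{E}_\pi\!\left[\sum_{t=0}^{\infty} \gamma^t \mathbf{r}_t\right]\right)
% ESR: expected utility of the vector return
\text{ESR:} \quad \max_\pi \; \mathbb{E}_\pi\!\left[u\!\left(\sum_{t=0}^{\infty} \gamma^t \mathbf{r}_t\right)\right]
```

The two criteria coincide when u is linear; the distinction matters for nonlinear utilities.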
A tutorial on MO-Gymnasium and MORL-Baselines is also available.

Features:
- Single and multi-policy algorithms under both SER and ESR criteria are implemented.
- All algorithms follow the MO-Gymnasium API.
- Performance is automatically reported in Weights & Biases dashboards.
- Linting and formatting are enforced by pre-commit hooks.
- Code is well documented.
- All algorithms are automatically tested.
- Utility functions are provided, e.g. Pareto pruning, experience buffers, etc. (a conceptual sketch of Pareto pruning follows this list).
- Performance has been tested and reported in a reproducible manner.
- Hyperparameter optimization is available.
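To illustrate one of these utilities conceptually, here is a minimal, self-contained Pareto pruning routine in plain numpy. It is a sketch of the idea only, not the library's own implementation (which lives under `common/`, described below):

```python
import numpy as np

def filter_dominated(points: np.ndarray) -> np.ndarray:
    """Keep only Pareto-efficient points, assuming every objective is maximized."""
    keep = []
    for i, p in enumerate(points):
        # p is dominated if some other point q is >= p in every objective
        # and > p in at least one objective.
        dominated = any(
            i != j and np.all(q >= p) and np.any(q > p)
            for j, q in enumerate(points)
        )
        if not dominated:
            keep.append(p)
    return np.array(keep)

# Example: three candidate policy returns over two objectives.
front = filter_dominated(np.array([[1.0, 5.0], [2.0, 4.0], [1.5, 3.0]]))
print(front)  # [[1. 5.] [2. 4.]] -- (1.5, 3.0) is dominated by (2.0, 4.0)
```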
Name | Single/Multi-policy | ESR/SER | Observation space | Action space | Paper |
---|---|---|---|---|---|
GPI-LS + GPI-PD | Multi | SER | Continuous | Discrete / Continuous | Paper and Supplementary Materials |
MORL/D | Multi | / | / | / | Paper |
Envelope Q-Learning | Multi | SER | Continuous | Discrete | Paper |
CAPQL | Multi | SER | Continuous | Continuous | Paper |
PGMORL¹ | Multi | SER | Continuous | Continuous | Paper / Supplementary Materials |
Pareto Conditioned Networks (PCN) | Multi | SER/ESR² | Continuous | Discrete / Continuous | Paper |
Pareto Q-Learning | Multi | SER | Discrete | Discrete | Paper |
MO Q-Learning | Single | SER | Discrete | Discrete | Paper |
MPMOQLearning (outer loop MOQL) | Multi | SER | Discrete | Discrete | Paper |
Optimistic Linear Support (OLS) | Multi | SER | / | / | Section 3.3 of the thesis |
Expected Utility Policy Gradient (EUPG) | Single | ESR | Discrete | Discrete | Paper |
¹ Currently, PGMORL is limited to environments with 2 objectives.
² PCN assumes environments with deterministic transitions.
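Below is a hedged sketch of how one of the algorithms in the table might be trained. The import path mirrors the repository structure described further down, but the constructor arguments and `train` signature are assumptions made for illustration; the `examples/` directory contains the authoritative usage:

```python
import mo_gymnasium as mo_gym
import numpy as np

# NOTE: the import path and argument names below are assumptions for
# illustration only; see the examples/ directory for the actual API.
from morl_baselines.single_policy.esr.eupg import EUPG

env = mo_gym.make("deep-sea-treasure-v0")  # two objectives: treasure value, time penalty

# EUPG targets the ESR criterion with a (possibly nonlinear) utility;
# a simple linear scalarization is used here.
agent = EUPG(
    env,
    scalarization=lambda reward, weights: np.dot(reward, weights),
    weights=np.array([0.5, 0.5]),
)
agent.train(total_timesteps=100_000)
```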
MORL-Baselines participates in Open RL Benchmark, which contains tracked experiments from popular RL libraries such as CleanRL and Stable-Baselines3.
We have run experiments of our algorithms on various environments from MO-Gymnasium. The results can be found here: https://wandb.ai/openrlbenchmark/MORL-Baselines. An issue tracking all the settings is available at #43. Some design documentation for the experimentation protocol is also available on our documentation website.
An example visualization of our dashboards with Pareto support is shown below:
As much as possible, this repo tries to follow the single-file implementation rule for all algorithms. The repo's structure is as follows:
- `examples/` contains a set of examples using MORL-Baselines with MO-Gymnasium environments.
- `common/` contains implementations of recurring concepts: replay buffers, neural networks, etc. See the documentation for more details.
- `multi_policy/` contains the implementations of multi-policy algorithms.
- `single_policy/` contains the implementations of single-policy algorithms (ESR and SER).
If you use MORL-Baselines in your research, please cite our NeurIPS 2023 paper:
@inproceedings{felten_toolkit_2023,
author = {Felten, Florian and Alegre, Lucas N. and Now{\'e}, Ann and Bazzan, Ana L. C. and Talbi, El Ghazali and Danoy, Gr{\'e}goire and Silva, Bruno Castro da},
title = {A Toolkit for Reliable Benchmarking and Research in Multi-Objective Reinforcement Learning},
booktitle = {Proceedings of the 37th Conference on Neural Information Processing Systems ({NeurIPS} 2023)},
year = {2023}
}
MORL-Baselines is currently maintained by Florian Felten (@ffelten) and Lucas N. Alegre (@LucasAlegre).
This repository is open to contributions, and we are always happy to receive new algorithms, bug fixes, or features. If you want to contribute, you can join our Discord server and discuss your ideas with us. You can also open an issue or a pull request directly.

Acknowledgements:
- Willem Röpke, for his implementation of Pareto Q-Learning (@wilrop)
- Mathieu Reymond, for providing us with the original implementation of PCN.
- Denis Steckelmacher and Conor F. Hayes, for providing us with the original implementation of EUPG.