feat: readme revamp (#1108)
* feat: initial readme revamp

* chore: some readme fixes

* fix: convert images to png

* chore: changes around performance plots in readme

* chore: add more benchmark images

* chore: update images

* chore: rename legend

* chore: add legend to readme

* chore: move install + getting started near the top

* feat: update system readmes

* chore: a note on ISAC

* chore: update branch naming convention doc

* chore: some readme updates

* feat: switch detailed install instruction to uv

* docs: python badge

* wip: system level docs

* feat: readme badges

* chore: add speed plot and move tables out of collapsible

* fix: run pre commits

* fix: tests budge link

* feat: system level configs

* chore: github math render fix

* chore: github math render fix and linting

* docs: add links to system readmes, papers and hydra

* docs: qlearning paper links

* docs: reword sebulba section to be distribution architectures

* docs: change reference to sable paper

* docs: clarify distribution architectures that are support for different envs

* docs: general spelling mistake fixes and relative links to docs and files

* docs: typo fixes

* docs: replace absolute website links with relative links

* docs: sable diagram caption

* docs: sable caption math

* docs: another sable diagram caption fix

* docs: sable diagram math render

* docs: add environment code and paper links

---------

Co-authored-by: RuanJohn <[email protected]>
Co-authored-by: OmaymaMahjoub <[email protected]>
Co-authored-by: Ruan de Kock <[email protected]>
4 people authored Dec 19, 2024
1 parent ae736ff commit 4076c7c
Showing 49 changed files with 184 additions and 271 deletions.
237 changes: 94 additions & 143 deletions README.md

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion docs/CONTRIBUTING.md
@@ -39,7 +39,7 @@ pre-commit run --all-files

## Naming Conventions
### Branch Names
We name our feature and bugfix branches as follows - `feature/[BRANCH-NAME]`, `bugfix/[BRANCH-NAME]` or `maintenance/[BRANCH-NAME]`. Please ensure `[BRANCH-NAME]` is hyphen delimited.
We name our feature and bugfix branches as follows - `feat/[BRANCH-NAME]`, `fix/[BRANCH-NAME]`. Please ensure `[BRANCH-NAME]` is hyphen delimited.
### Commit Messages
We follow the conventional commits [standard](https://www.conventionalcommits.org/en/v1.0.0/).

22 changes: 12 additions & 10 deletions docs/DETAILED_INSTALL.md
@@ -1,12 +1,11 @@
# Detailed installation guide

### Conda virtual environment
We recommend using `conda` for package management. These instructions should allow you to install and run mava.
We recommend using [uv](https://docs.astral.sh/uv/) for package management. These instructions should allow you to install and run mava.

1. Create and activate a virtual environment
1. Install `uv`
```bash
conda create -n mava python=3.12
conda activate mava
curl -LsSf https://astral.sh/uv/install.sh | sh
```

2. Clone mava
@@ -15,19 +14,22 @@ git clone https://github.com/instadeepai/Mava.git
cd mava
```

3. Install the dependencies
3. Create and activate a virtual environment and install requirements
```bash
pip install -e .
uv venv -p=3.12
source .venv/bin/activate
uv pip install -e .
```

4. Install JAX on your accelerator. The example below is for an NVIDIA GPU, please see the [official install guide](https://github.com/google/jax#installation) for other accelerators.
Note that the JAX version we use will change over time, so please check the [requirements.txt](../requirements/requirements.txt) for our latest tested JAX version. A quick sanity check of the install is sketched after these steps.
```bash
pip install "jax[cuda12]==0.4.30"
uv pip install "jax[cuda12]==0.4.30"
```

5. Run a system!
```bash
python mava/systems/ppo/ff_ippo.py env=rware
python mava/systems/ppo/anakin/ff_ippo.py env=rware
```
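
A quick way to confirm that JAX can see your accelerator after step 4 (a hypothetical sanity check, not part of the official guide):

```python
# Minimal check that JAX picked up the accelerator (hypothetical example).
import jax

print(jax.devices())           # e.g. [CudaDevice(id=0)] on a working CUDA install
print(jax.default_backend())   # expected to be "gpu" rather than "cpu"
```

If only CPU devices show up, the CUDA wheel most likely did not install correctly.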

### Docker
@@ -50,4 +52,4 @@ If you are having trouble with dependencies we recommend using our docker images

For example, `make run example=mava/systems/ppo/ff_ippo.py`.

Alternatively, run bash inside a docker container with Mava installed by running `make bash`, and from there systems can be run as follows: `python dir/to/system.py`.
Binary file added docs/images/algo_images/sable-arch.png
Binary file added docs/images/benchmark_results/connector.png
Binary file added docs/images/benchmark_results/lbf.png
Binary file added docs/images/benchmark_results/legend.jpg
Binary file added docs/images/benchmark_results/mabrax.png
Binary file added docs/images/benchmark_results/mpe.png
Binary file added docs/images/benchmark_results/rware.png
Binary file added docs/images/benchmark_results/smax.png
Binary file removed docs/images/lbf_results/15x15-4p-3f_rec_mappo.png
Binary file removed docs/images/lbf_results/legend_rec_mappo.png
Binary file removed docs/images/rware_results/ff_ippo/small-4ag.png
Binary file removed docs/images/rware_results/ff_ippo/tiny-2ag.png
Binary file removed docs/images/rware_results/ff_ippo/tiny-4ag.png
Binary file removed docs/images/rware_results/ff_mappo/small-4ag.png
Binary file removed docs/images/rware_results/ff_mappo/tiny-2ag.png
Binary file removed docs/images/rware_results/ff_mappo/tiny-4ag.png
Binary file removed docs/images/rware_results/rec_ippo/small-4ag.png
Binary file removed docs/images/rware_results/rec_ippo/tiny-2ag.png
Binary file removed docs/images/rware_results/rec_ippo/tiny-4ag.png
Binary file removed docs/images/rware_results/rec_mappo/small-4ag.png
Binary file removed docs/images/rware_results/rec_mappo/tiny-2ag.png
Binary file removed docs/images/rware_results/rec_mappo/tiny-4ag.png
Binary file removed docs/images/smax_results/10m_vs_11m.png
Binary file removed docs/images/smax_results/27m_vs_30m.png
Binary file removed docs/images/smax_results/2s3z.png
Binary file removed docs/images/smax_results/3s5z.png
Binary file removed docs/images/smax_results/3s5z_vs_3s6z.png
Binary file removed docs/images/smax_results/3s_vs_5z.png
Binary file removed docs/images/smax_results/5m_vs_6m.png
Binary file removed docs/images/smax_results/6h_vs_8z.png
Binary file removed docs/images/smax_results/legend.png
Binary file removed docs/images/speed_results/mava_sps_results.png
Binary file added docs/images/speed_results/speed.png
74 changes: 0 additions & 74 deletions docs/jumanji_rware_comparison.md

This file was deleted.

43 changes: 0 additions & 43 deletions docs/smax_benchmark.md

This file was deleted.

6 changes: 6 additions & 0 deletions mava/systems/mat/README.md
@@ -0,0 +1,6 @@
# Multi-agent Transformer

We provide an implementation of the Multi-agent Transformer algorithm in JAX. MAT casts cooperative multi-agent reinforcement learning as a sequence modelling problem where agent observations and actions are treated as a sequence. At each timestep the observations of all agents are encoded and then these encoded observations are used for auto-regressive action selection.
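
As a rough illustration of this encode-then-decode flow, here is a minimal NumPy sketch with toy stand-ins for the learned encoder and decoder (hypothetical shapes, not Mava's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
num_agents, obs_dim, embed_dim, num_actions = 3, 4, 8, 5

# Toy stand-ins for the learned encoder and decoder (illustration only).
w_enc = rng.normal(size=(obs_dim, embed_dim))
w_dec = rng.normal(size=(embed_dim + 1, num_actions))

def encode(obs):
    # Encode every agent's observation in parallel: (num_agents, obs_dim) -> (num_agents, embed_dim).
    return np.tanh(obs @ w_enc)

def decode(encoded, agent_id, prev_action):
    # Condition the current agent's action logits on its encoded observation and
    # the action chosen by the previous agent in the sequence.
    features = np.concatenate([encoded[agent_id], [float(prev_action)]])
    return features @ w_dec

obs = rng.normal(size=(num_agents, obs_dim))
encoded = encode(obs)

actions, prev_action = [], 0  # 0 acts as a start-of-sequence token here
for agent_id in range(num_agents):        # auto-regressive loop over the agent sequence
    logits = decode(encoded, agent_id, prev_action)
    prev_action = int(np.argmax(logits))  # greedy for illustration; sampled in practice
    actions.append(prev_action)
print(actions)
```

In the actual algorithm the encoder and decoder are attention-based networks and actions are sampled from the resulting policy rather than taken greedily.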

## Relevant paper:
* [Multi-Agent Reinforcement Learning is a Sequence Modeling Problem](https://arxiv.org/pdf/2205.14953)
17 changes: 17 additions & 0 deletions mava/systems/ppo/README.md
@@ -0,0 +1,17 @@
# Proximal Policy Optimization

We provide the following four multi-agent extensions to [PPO](https://arxiv.org/pdf/1707.06347), implemented using the Anakin architecture.

* [ff-IPPO](../../systems/ppo/anakin/ff_ippo.py)
* [ff-MAPPO](../../systems/ppo/anakin/ff_mappo.py)
* [rec-IPPO](../../systems/ppo/anakin/rec_ippo.py)
* [rec-MAPPO](../../systems/ppo/anakin/rec_mappo.py)

In all cases, IPPO implies an implementation following the independent learners MARL paradigm, while MAPPO implies an implementation following the centralised training with decentralised execution paradigm by having a centralised critic during training. The `ff` or `rec` prefixes in the system names indicate whether the policy networks are MLPs or have a [GRU](https://arxiv.org/pdf/1406.1078) memory module to help learning despite partial observability in the environment.
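
The practical difference between the independent and centralised-critic variants is mainly what the critic conditions on. A minimal sketch of that difference (hypothetical shapes, not Mava's actual code):

```python
import numpy as np

num_agents, obs_dim = 3, 4
local_obs = np.random.rand(num_agents, obs_dim)   # one observation per agent

def independent_critic_inputs(obs):
    # IPPO: each agent's value function conditions only on that agent's own observation.
    return [obs[i] for i in range(obs.shape[0])]

def centralised_critic_inputs(obs):
    # MAPPO: the critic conditions on the concatenation of all agents' observations
    # (or on a global state, when the environment provides one).
    joint = obs.reshape(-1)
    return [joint for _ in range(obs.shape[0])]

print(independent_critic_inputs(local_obs)[0].shape)  # (4,)
print(centralised_critic_inputs(local_obs)[0].shape)  # (12,)
```

Either way, execution stays decentralised: at rollout time each agent acts from its own observation, and the centralised input is only used to train the critic.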

In addition to the Anakin-based implementations, we also include a Sebulba-based implementation of [ff-IPPO](../../systems/ppo/sebulba/ff_ippo.py) which can be used on environments that are not written in JAX and adhere to the Gymnasium API.

## Relevant papers:
* [Proximal Policy Optimization Algorithms](https://arxiv.org/pdf/1707.06347)
* [The Surprising Effectiveness of PPO in Cooperative Multi-Agent Games](https://arxiv.org/pdf/2103.01955)
* [Is Independent Learning All You Need in the StarCraft Multi-Agent Challenge?](https://arxiv.org/pdf/2011.09533)
14 changes: 14 additions & 0 deletions mava/systems/q_learning/README.md
@@ -0,0 +1,14 @@
# Q Learning

We provide two Q-Learning based systems that follow the independent learners and centralised training with decentralised execution paradigms:

* [rec-IQL](../../systems/q_learning/anakin/rec_iql.py)
* [rec-QMIX](../../systems/q_learning/anakin/rec_qmix.py)

`rec-IQL` is a multi-agent version of DQN that uses double DQN and has a GRU memory module, while `rec-QMIX` is an implementation of QMIX in JAX that uses monotonic value function decomposition.
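
For intuition, a minimal NumPy sketch of QMIX-style monotonic mixing (hypothetical shapes and a fixed hypernetwork, not Mava's actual implementation; biases and the paper's ELU activation are simplified away):

```python
import numpy as np

rng = np.random.default_rng(0)
num_agents, state_dim, hidden_dim = 3, 6, 16

# Toy stand-ins for the hypernetworks that, in QMIX, generate mixing weights from the global state.
w1_hyper = rng.normal(size=(state_dim, num_agents * hidden_dim))
w2_hyper = rng.normal(size=(state_dim, hidden_dim))

def mix(agent_q_values, state):
    """Combine per-agent Q-values into Q_tot with non-negative mixing weights,
    so that dQ_tot/dQ_i >= 0 for every agent (the monotonicity constraint)."""
    w1 = np.abs(state @ w1_hyper).reshape(num_agents, hidden_dim)
    w2 = np.abs(state @ w2_hyper)
    hidden = np.maximum(agent_q_values @ w1, 0.0)  # ReLU here; the paper uses ELU
    return float(hidden @ w2)

q_values = rng.normal(size=num_agents)
state = rng.normal(size=state_dim)
print(mix(q_values, state))
```

Taking the absolute value of the hypernetwork outputs is what enforces monotonicity, which keeps the joint argmax of `Q_tot` consistent with each agent greedily maximising its own Q-value.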

## Relevant papers:
* [Playing Atari with Deep Reinforcement Learning](https://arxiv.org/pdf/1312.5602)
* [Multiagent Cooperation and Competition with Deep Reinforcement Learning](https://arxiv.org/pdf/1511.08779)
* [QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning](https://arxiv.org/pdf/1803.11485)
24 changes: 24 additions & 0 deletions mava/systems/sable/README.md
@@ -0,0 +1,24 @@
# Sable

Sable is an algorithm that was developed by the research team at InstaDeep. It also casts MARL as a sequence modelling problem and leverages the [advantage decomposition theorem](https://arxiv.org/pdf/2108.08612) through auto-regressive action selection for convergence guarantees, and it can scale to thousands of agents by leveraging the memory efficiency of Retentive Networks.

We provide two Anakin based implementations of Sable:
* [ff-sable](../../systems/sable/anakin/ff_sable.py)
* [rec-sable](../../systems/sable/anakin/rec_sable.py)

Here the `ff` prefix implies that the algorithm retains no memory over time and treats only the agents as the sequence dimension, while `rec` implies that the algorithm maintains memory over both agents and time for long-context memory in partially observable environments.

For an overview of how the algorithm works, please see the diagram below. For a more detailed overview please see our associated [paper](https://arxiv.org/pdf/2410.01706).

<p align="center">
<a href="../../../docs/images/algo_images/sable-arch.png">
<img src="../../../docs/images/algo_images/sable-arch.png" alt="Sable Arch" width="80%"/>
</a>
</p>

*Sable architecture and execution.* The encoder receives all agent observations $o_t^1,\dots,o_t^N$ from the current timestep $t$ along with a hidden state $h\_{t-1}^{\text{enc}}$ representing past timesteps and produces encoded observations $\hat{o}\_t^1,\dots,\hat{o}\_t^N$, observation-values $v \left( \hat{o}\_t^1 \right),\dots,v \left( \hat{o}\_t^N \right) $, and a new hidden state $h_t^{\text{enc}}$.
The decoder performs recurrent retention over the current action $a_t^{m-1}$, followed by cross attention with the encoded observations, producing the next action $a_t^m$. The initial hidden states for recurrence over agents in the decoder at the current timestep are $( h\_{t-1}^{\text{dec}\_1},h\_{t-1}^{\text{dec}\_2})$, and by the end of the decoding process, it generates the updated hidden states $(h_t^{\text{dec}_1},h_t^{\text{dec}_2})$.
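
To make the dataflow in the caption concrete, here is a heavily simplified, hypothetical sketch of how hidden states are threaded through time while actions are selected auto-regressively over agents (toy functions only, not Sable's retention-based networks):

```python
import numpy as np

rng = np.random.default_rng(0)
num_agents, obs_dim, hid_dim, num_actions = 4, 5, 8, 3

# Toy stand-ins for Sable's learned encoder and decoder (illustration only).
w_enc = rng.normal(size=(obs_dim, hid_dim))
w_val = rng.normal(size=(hid_dim,))
w_dec = rng.normal(size=(hid_dim + 1, num_actions))

def encode(obs, h_enc):
    # Encoded observations depend on the current observations and the hidden state
    # summarising past timesteps; observation-values and a new hidden state are returned.
    encoded = np.tanh(obs @ w_enc + h_enc)          # (num_agents, hid_dim)
    values = encoded @ w_val                        # one observation-value per agent
    new_h_enc = 0.9 * h_enc + 0.1 * encoded.mean(axis=0)
    return encoded, values, new_h_enc

def decode(encoded_obs, prev_action, h_dec):
    # Each agent's action is conditioned on its encoded observation, the previous
    # agent's action and the decoder hidden state, which is updated as we go.
    features = np.concatenate([encoded_obs, [float(prev_action)]])
    logits = features @ w_dec
    new_h_dec = 0.9 * h_dec + 0.1 * encoded_obs
    return logits, new_h_dec

h_enc, h_dec = np.zeros(hid_dim), np.zeros(hid_dim)
for t in range(3):                                   # recurrence over timesteps
    obs = rng.normal(size=(num_agents, obs_dim))
    encoded, values, h_enc = encode(obs, h_enc)
    actions, prev_action = [], 0
    for m in range(num_agents):                      # auto-regressive selection over agents
        logits, h_dec = decode(encoded[m], prev_action, h_dec)
        prev_action = int(np.argmax(logits))
        actions.append(prev_action)
    print(t, actions)
```

In Sable itself the recurrence is implemented with retention rather than the toy exponential moving averages used here.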

## Relevant paper:
* [Performant, Memory Efficient and Scalable Multi-Agent Reinforcement Learning](https://arxiv.org/pdf/2410.01706)
* [Retentive Network: A Successor to Transformer for Large Language Models](https://arxiv.org/pdf/2307.08621)
16 changes: 16 additions & 0 deletions mava/systems/sac/README.md
@@ -0,0 +1,16 @@
# Soft Actor-Critic

We provide the following three multi-agent extensions to the Soft Actor-Critic (SAC) algorithm.

* [ff-ISAC](../../systems/sac/anakin/ff_isac.py)
* [ff-MASAC](../../systems/sac/anakin/ff_masac.py)
* [ff-HASAC](../../systems/sac/anakin/ff_hasac.py)

`ISAC` is an implementation following the independent learners MARL paradigm, while `MASAC` is an implementation that follows the centralised training with decentralised execution paradigm by having a centralised critic during training. `HASAC` follows the heterogeneous-agent learning paradigm through sequential policy updates. The `ff` prefix to the algorithm names indicates that the algorithms use MLP-based policy networks.
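
To illustrate the distinction between simultaneous and sequential (heterogeneous-agent) policy updates, here is a toy sketch with a placeholder improvement step (not Mava's actual SAC update):

```python
import numpy as np

rng = np.random.default_rng(0)
num_agents = 3
policies = [rng.normal(size=4) for _ in range(num_agents)]  # toy policy parameters

def improve(params, other_policies):
    # Placeholder for one SAC-style policy improvement step; a real update would
    # maximise the entropy-regularised critic with the other agents' policies fixed.
    return params + 0.1 * rng.normal(size=params.shape)

# ISAC / MASAC style: all agents update simultaneously against the *old* policies.
old = [p.copy() for p in policies]
simultaneous = [improve(old[i], [old[j] for j in range(num_agents) if j != i])
                for i in range(num_agents)]

# HASAC style: agents update one at a time in a randomly permuted order, each
# against the *already updated* policies of the agents that went before it.
sequential = [p.copy() for p in policies]
for i in rng.permutation(num_agents):
    others = [sequential[j] for j in range(num_agents) if j != i]
    sequential[i] = improve(sequential[i], others)
```

The sequential scheme lets each agent adapt to its predecessors' new policies within the same update, which is the key difference from the simultaneous scheme above.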

## Relevant papers
* [Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor](https://arxiv.org/pdf/1801.01290)
* [Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments](https://arxiv.org/pdf/1706.02275)
* [Robust Multi-Agent Control via Maximum Entropy Heterogeneous-Agent Reinforcement Learning](https://arxiv.org/pdf/2306.10715)
