* feat: initial readme revamp
* chore: some readme fixes
* fix: convert images to png
* chore: changes around performance plots in readme
* chore: add more benchmark images
* chore: update images
* chore: rename legend
* chore: add legend to readme
* chore: move install + getting started near the top
* feat: update system readmes
* chore: a note on ISAC
* chore: update branch naming convention doc
* chore: some readme updates
* feat: switch detailed install instruction to uv
* docs: python badge
* wip: system level docs
* feat: readme badges
* chore: add speed plot and move tables out of collapsible
* fix: run pre commits
* fix: tests budge link
* feat: system level configs
* chore: github math render fix
* chore: github math render fix and linting
* docs: add links to system readmes, papers and hydra
* docs: qlearning paper links
* docs: reword sebulba section to be distribution architectures
* docs: change reference to sable paper
* docs: clarify distribution architectures that are support for different envs
* docs: general spelling mistake fixes and relative links to docs and files
* docs: typo fixes
* docs: replace absolute website links with relative links
* docs: sable diagram caption
* docs: sable caption math
* docs: another sable diagram caption fix
* docs: sable diagram math render
* docs: add environment code and paper links

---------

Co-authored-by: RuanJohn <[email protected]>
Co-authored-by: OmaymaMahjoub <[email protected]>
Co-authored-by: Ruan de Kock <[email protected]>
1 parent ae736ff · commit 4076c7c

Showing 49 changed files with 184 additions and 271 deletions.
@@ -0,0 +1,6 @@
# Multi-agent Transformer

We provide an implementation of the Multi-agent Transformer (MAT) algorithm in JAX. MAT casts cooperative multi-agent reinforcement learning as a sequence-modelling problem in which agent observations and actions are treated as a sequence. At each timestep the observations of all agents are encoded, and these encoded observations are then used for auto-regressive action selection.
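
To make the auto-regressive selection step concrete, here is a minimal sketch in JAX. The `encode` and `decode_step` functions are hypothetical stand-ins for the encoder and decoder networks, not the actual functions used in this repository.

```python
# Minimal sketch of MAT-style auto-regressive action selection (illustrative only).
import jax
import jax.numpy as jnp


def select_actions(params, obs, key, encode, decode_step, num_agents, num_actions):
    # Encode the observations of all agents jointly: (N, obs_dim) -> (N, embed_dim).
    encoded_obs = encode(params, obs)

    actions = jnp.zeros((num_agents,), dtype=jnp.int32)
    prev_action = jnp.zeros((num_actions,))  # start-of-sequence placeholder

    # Choose actions agent by agent, conditioning each agent's action on the
    # actions already selected for the preceding agents in the sequence.
    for agent in range(num_agents):
        key, subkey = jax.random.split(key)
        logits = decode_step(params, encoded_obs, prev_action, agent)
        action = jax.random.categorical(subkey, logits)
        actions = actions.at[agent].set(action)
        prev_action = jax.nn.one_hot(action, num_actions)

    return actions
```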

## Relevant paper:
* [Multi-Agent Reinforcement Learning is a Sequence Modeling Problem](https://arxiv.org/pdf/2205.14953)
@@ -0,0 +1,17 @@
# Proximal Policy Optimization

We provide the following four multi-agent extensions to [PPO](https://arxiv.org/pdf/1707.06347), all following the Anakin architecture:

* [ff-IPPO](../../systems/ppo/anakin/ff_ippo.py)
* [ff-MAPPO](../../systems/ppo/anakin/ff_mappo.py)
* [rec-IPPO](../../systems/ppo/anakin/rec_ippo.py)
* [rec-MAPPO](../../systems/ppo/anakin/rec_mappo.py)

In all cases, IPPO denotes an implementation following the independent-learners MARL paradigm, while MAPPO denotes an implementation following the centralised training with decentralised execution paradigm, using a centralised critic during training. The `ff` and `rec` prefixes in the system names indicate whether the policy networks are MLPs or include a [GRU](https://arxiv.org/pdf/1406.1078) memory module to help learning under partial observability in the environment.
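
As a rough illustration of the difference between the independent and centralised critics, the following sketch (not the repository's actual code; shapes and variable names are assumptions) shows what each critic would see:

```python
# Illustrative comparison of critic inputs for IPPO vs MAPPO.
import jax.numpy as jnp

num_agents, obs_dim = 3, 8
obs = jnp.ones((num_agents, obs_dim))  # per-agent observations

# IPPO: each agent's critic sees only that agent's own observation.
ippo_critic_input = obs  # shape (num_agents, obs_dim)

# MAPPO: the centralised critic sees a global view built from all observations,
# repeated per agent so that every agent's value estimate uses the same input.
global_obs = obs.reshape(-1)  # shape (num_agents * obs_dim,)
mappo_critic_input = jnp.tile(global_obs, (num_agents, 1))  # (num_agents, num_agents * obs_dim)
```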

In addition to the Anakin-based implementations, we also include a Sebulba-based implementation of [ff-IPPO](../../systems/ppo/sebulba/ff_ippo.py), which can be used with environments that are not written in JAX and adhere to the Gymnasium API.
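
For reference, a minimal example of the Gymnasium-style reset/step interface such an environment exposes is shown below (the environment used here is a standard single-agent example, chosen purely to illustrate the API):

```python
# Requires the `gymnasium` package; shows the reset/step protocol only.
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)
for _ in range(10):
    action = env.action_space.sample()
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```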

## Relevant papers:
* [Proximal Policy Optimization Algorithms](https://arxiv.org/pdf/1707.06347)
* [The Surprising Effectiveness of PPO in Cooperative Multi-Agent Games](https://arxiv.org/pdf/2103.01955)
* [Is Independent Learning All You Need in the StarCraft Multi-Agent Challenge?](https://arxiv.org/pdf/2011.09533)
@@ -0,0 +1,14 @@
# Q-Learning

We provide two Q-learning-based systems that follow the independent-learners and centralised training with decentralised execution paradigms respectively:

* [rec-IQL](../../systems/q_learning/anakin/rec_iql.py)
* [rec-QMIX](../../systems/q_learning/anakin/rec_qmix.py)

`rec-IQL` is a multi-agent version of DQN that uses double DQN and a GRU memory module, while `rec-QMIX` is a JAX implementation of QMIX that uses monotonic value function decomposition.
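
The monotonic mixing at the heart of QMIX can be sketched as follows. This is a minimal illustration, not the repository's implementation; the hypernetwork weights `w1`, `w2` and biases are assumed to be produced elsewhere from the global state.

```python
# Illustrative QMIX-style monotonic mixing of per-agent Q-values.
import jax
import jax.numpy as jnp


def qmix_mix(agent_qs, w1, b1, w2, b2):
    """Combine per-agent Q-values into a joint Q_tot with monotonic mixing.

    agent_qs: (num_agents,) chosen-action Q-values for each agent.
    w1: (num_agents, embed_dim), b1: (embed_dim,), w2: (embed_dim,), b2: scalar.
    Taking the absolute value of the state-conditioned weights keeps
    dQ_tot/dQ_i >= 0 for every agent i, which is the monotonicity constraint.
    """
    hidden = jax.nn.elu(agent_qs @ jnp.abs(w1) + b1)
    q_tot = hidden @ jnp.abs(w2) + b2
    return q_tot
```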

## Relevant papers:
* [Playing Atari with Deep Reinforcement Learning](https://arxiv.org/pdf/1312.5602)
* [Multiagent Cooperation and Competition with Deep Reinforcement Learning](https://arxiv.org/pdf/1511.08779)
* [QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning](https://arxiv.org/pdf/1803.11485)
@@ -0,0 +1,24 @@
# Sable

Sable is an algorithm developed by the research team at InstaDeep. It also casts MARL as a sequence-modelling problem, leverages the [advantage decomposition theorem](https://arxiv.org/pdf/2108.08612) through auto-regressive action selection to obtain convergence guarantees, and can scale to thousands of agents by exploiting the memory efficiency of Retentive Networks.

We provide two Anakin-based implementations of Sable:
* [ff-sable](../../systems/sable/anakin/ff_sable.py)
* [rec-sable](../../systems/sable/anakin/rec_sable.py)

Here the `ff` prefix indicates that the algorithm retains no memory over time and treats only the agents as the sequence dimension, while `rec` indicates that the algorithm maintains memory over both agents and time, giving long-context memory in partially observable environments.
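
A high-level sketch of this distinction is given below. The names (`act`, `apply_fn`) are hypothetical and not the repository's API; the point is only how the hidden state is handled between timesteps.

```python
# ff-sable resets its hidden state every timestep (agents are the only
# sequence dimension); rec-sable carries the hidden state across timesteps.
import jax.numpy as jnp


def act(params, obs_t, hidden, apply_fn, recurrent):
    # apply_fn encodes all agent observations and auto-regressively decodes
    # one action per agent, returning the actions and an updated hidden state.
    actions, new_hidden = apply_fn(params, obs_t, hidden)
    if not recurrent:
        # ff-sable: start the next timestep with a fresh (zeroed) hidden state.
        new_hidden = jnp.zeros_like(hidden)
    return actions, new_hidden
```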

For an overview of how the algorithm works, please see the diagram below. For a more detailed explanation, please see our associated [paper](https://arxiv.org/pdf/2410.01706).

<p align="center">
    <a href="../../../docs/images/algo_images/sable-arch.png">
        <img src="../../../docs/images/algo_images/sable-arch.png" alt="Sable Arch" width="80%"/>
    </a>
</p>

*Sable architecture and execution.* The encoder receives all agent observations $o_t^1,\dots,o_t^N$ from the current timestep $t$, along with a hidden state $h\_{t-1}^{\text{enc}}$ representing past timesteps, and produces encoded observations $\hat{o}\_t^1,\dots,\hat{o}\_t^N$, observation-values $v \left( \hat{o}\_t^1 \right),\dots,v \left( \hat{o}\_t^N \right)$, and a new hidden state $h_t^{\text{enc}}$.
The decoder performs recurrent retention over the current action $a_t^{m-1}$, followed by cross attention with the encoded observations, producing the next action $a_t^m$. The initial hidden states for recurrence over agents in the decoder at the current timestep are $(h\_{t-1}^{\text{dec}\_1}, h\_{t-1}^{\text{dec}\_2})$, and by the end of the decoding process it generates the updated hidden states $(h_t^{\text{dec}_1}, h_t^{\text{dec}_2})$.

## Relevant papers:
* [Performant, Memory Efficient and Scalable Multi-Agent Reinforcement Learning](https://arxiv.org/pdf/2410.01706)
* [Retentive Network: A Successor to Transformer for Large Language Models](https://arxiv.org/pdf/2307.08621)
@@ -0,0 +1,16 @@
# Soft Actor-Critic

We provide the following three multi-agent extensions to the Soft Actor-Critic (SAC) algorithm:

* [ff-ISAC](../../systems/sac/anakin/ff_isac.py)
* [ff-MASAC](../../systems/sac/anakin/ff_masac.py)
* [ff-HASAC](../../systems/sac/anakin/ff_hasac.py)

`ISAC` is an implementation following the independent-learners MARL paradigm, while `MASAC` follows the centralised training with decentralised execution paradigm by using a centralised critic during training. `HASAC` follows the heterogeneous-agent learning paradigm through sequential policy updates. The `ff` prefix to the algorithm names indicates that the algorithms use MLP-based policy networks.
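
The sequential-update idea behind heterogeneous-agent methods such as HASAC can be sketched as follows. The helper `update_agent_policy` and the use of a random agent ordering are assumptions for illustration, not the repository's code.

```python
# Illustrative sequential per-agent policy updates (HASAC-style), as opposed
# to updating all agents' policies simultaneously.
import jax


def sequential_policy_update(params, batch, num_agents, update_agent_policy, key):
    # update_agent_policy is a hypothetical per-agent SAC policy update that
    # returns the parameters with that agent's policy improved.
    for agent_id in jax.random.permutation(key, num_agents):
        # Each agent is updated against parameters that already reflect the
        # updates of the agents earlier in the (shuffled) sequence.
        params = update_agent_policy(params, batch, int(agent_id))
    return params
```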

## Relevant papers:
* [Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor](https://arxiv.org/pdf/1801.01290)
* [Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments](https://arxiv.org/pdf/1706.02275)
* [Robust Multi-Agent Control via Maximum Entropy Heterogeneous-Agent Reinforcement Learning](https://arxiv.org/pdf/2306.10715)