Acme includes a number of pre-built agents listed below. All are provided as single-process agents, but we also include a distributed implementation using Launchpad, with more examples coming soon. Distributed agents share the exact same learning and acting code as their single-process counterparts.
We've also listed the agents below in separate sections based on their different use cases, although these distinctions are often subtle. For more information on each implementation, see the relevant agent-specific README.
Acme has long had a focus on continuous control agents (i.e. settings where the action space is continuous). The following agents focus on this setting:
We also include a number of agents built with discrete action spaces in mind. Note that the distinction between these and the continuous agents can be somewhat arbitrary; IMPALA, for example, could also be implemented for continuous action spaces, but here we focus on a discrete-action variant. A brief usage sketch for the DQN agent follows the table.
Agent | Paper | Code |
---|---|---|
Deep Q-Networks (DQN) | Horgan et al., 2018 | |
Importance-Weighted Actor-Learner Architectures (IMPALA) | Espeholt et al., 2018 | |
Recurrent Replay Distributed DQN (R2D2) | Kapturowski et al., 2019 | |
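
As a rough single-process usage sketch for one of the agents above, the snippet below wires the TF DQN agent into Acme's `EnvironmentLoop` on a Gym task. The exact constructor arguments (in particular the Sonnet Q-network passed to `dqn.DQN`) are assumptions based on the usual quickstart pattern and may differ between releases; the agent-specific README is authoritative.

```python
import acme
from acme import specs
from acme import wrappers
from acme.agents.tf import dqn
import gym
import sonnet as snt

# Wrap a Gym environment so it conforms to the dm_env interface Acme expects.
environment = wrappers.GymWrapper(gym.make('CartPole-v0'))
environment = wrappers.SinglePrecisionWrapper(environment)
environment_spec = specs.make_environment_spec(environment)

# A simple Q-network: one value per discrete action.
network = snt.Sequential([
    snt.Flatten(),
    snt.nets.MLP([64, 64, environment_spec.actions.num_values]),
])

# In the single-process setting the agent bundles acting, learning and replay.
agent = dqn.DQN(environment_spec, network)

# The standard acting/learning loop shared by all agents.
loop = acme.EnvironmentLoop(environment, agent)
loop.run(num_episodes=100)
```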
The structure of Acme also lends itself quite nicely to "learner-only" algorithms for use in batch RL (with no environment interactions); a sketch of this pattern follows the table below. Implemented algorithms include:
Agent | Paper | Code |
---|---|---|
Behavior Cloning (BC) | - | |
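
Since these algorithms never touch an environment, they are driven purely through Acme's `Learner` interface: construct the learner on a dataset of logged transitions and call `step()` repeatedly. The sketch below illustrates that pattern for BC; the `bc.BCLearner` arguments are assumptions (check the BC README for the exact signature), and `make_demonstration_dataset` is a hypothetical placeholder for whatever pipeline yields your batched `tf.data.Dataset` of transitions.

```python
import sonnet as snt
from acme.agents.tf import bc

num_actions = 4  # size of the discrete action space in the logged data

# Hypothetical helper (not an Acme API): yields batches of logged transitions.
dataset = make_demonstration_dataset(batch_size=256)

# A policy network mapping observations to logits over discrete actions.
network = snt.Sequential([
    snt.Flatten(),
    snt.nets.MLP([64, 64, num_actions]),
])

# Argument names are assumed; see the BC README for the exact constructor.
learner = bc.BCLearner(network=network, learning_rate=1e-4, dataset=dataset)

# With no environment interaction, training is just repeated learner steps.
for _ in range(10_000):
    learner.step()
```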
Acme also makes it easy to combine active data acquisition with data from demonstrations; a sketch of the underlying data-mixing idea follows the table. Such algorithms include:
Agent | Paper | Code |
---|---|---|
Deep Q-Learning from Demonstrations (DQfD) | Hester et al., 2017 | |
Recurrent Replay Distributed DQN from Demonstrations (R2D3) | Gulcehre et al., 2020 | |
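
Under the hood, these agents interleave transitions sampled from the demonstrations with transitions collected online (DQfD's demonstration ratio, for instance). The snippet below is not Acme's internal API; it is only a sketch of that data-mixing idea in plain `tf.data`, with the two datasets standing in for the replay buffer and the demonstration store.

```python
import tensorflow as tf

# Stand-ins for the two sources: online replay data and logged demonstrations.
replay_dataset = tf.data.Dataset.from_tensor_slices(tf.range(100, 110)).repeat()
demo_dataset = tf.data.Dataset.from_tensor_slices(tf.range(10)).repeat()

# Sample demonstrations 25% of the time and replay 75% of the time,
# roughly what a demonstration ratio of 0.25 means.
mixed = tf.data.experimental.sample_from_datasets(
    [demo_dataset, replay_dataset], weights=[0.25, 0.75])

batches = mixed.batch(32)
```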
Finally, Acme also includes a variant of MCTS that can be used for model-based RL with a given or learned simulator:
Agent | Paper | Code |
---|---|---|
Monte-Carlo Tree Search (MCTS) | Silver et al., 2018 | |