
Commit

style: Fix typos and add codespell pre-commit hook (#194)
* fix: fix typos and add codespell pre-commit hook

---------

Co-authored-by: Jeroen Van Goey <[email protected]>
Lookatator and BioGeek authored Sep 21, 2024
1 parent 6656f5e commit 69b781e
Showing 60 changed files with 120 additions and 109 deletions.
10 changes: 10 additions & 0 deletions .pre-commit-config.yaml
@@ -45,3 +45,13 @@ repos:
rev: v1.11.2
hooks:
- id: mypy

- repo: https://github.com/codespell-project/codespell
rev: v2.3.0
hooks:
- id: codespell
name: codespell
description: Checks for common misspellings in text files.
entry: codespell
language: python
types: [text]
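
Not part of the commit itself — a minimal sketch of how the new hook can be exercised locally, assuming `pre-commit` (and the hook revisions above) is installed; the equivalent shell command is `pre-commit run codespell --all-files`:

```python
# Illustrative only: run the newly added codespell hook against all tracked files.
import subprocess

subprocess.run(
    ["pre-commit", "run", "codespell", "--all-files"],  # "codespell" is the hook id from the config above
    check=True,  # raise if codespell reports misspellings
)
```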
2 changes: 1 addition & 1 deletion docs/api_documentation/core/pbt.md
@@ -2,7 +2,7 @@

[PBT](https://arxiv.org/abs/1711.09846) is optimization method to jointly optimise a population of models and their hyperparameters to maximize performance.

To use PBT in QDax to train SAC, one can use the two following components (see [examples](../../examples/sac_pbt.ipynb) to see how to use the components appropriatly):
To use PBT in QDax to train SAC, one can use the two following components (see [examples](../../examples/sac_pbt.ipynb) to see how to use the components appropriately):

::: qdax.baselines.sac_pbt.PBTSAC

4 changes: 2 additions & 2 deletions docs/overview.md
@@ -1,6 +1,6 @@
# QDax Overview

QDax has been designed to be modular yet flexible so it's easy for anyone to use and extend on the different state-of-the-art QD algortihms available.
QDax has been designed to be modular yet flexible so it's easy for anyone to use and extend on the different state-of-the-art QD algorithms available.
For instance, MAP-Elites is designed to work with a few modular and simple components: `container`, `emitter`, and `scoring_function`.

## Key concepts
@@ -17,7 +17,7 @@ The `scoring_function` defines the problem/task we want to solve and functions t
With this modularity, a user can easily swap out any one of the components and pass it to the `MAPElites` class, avoiding having to re-implement all the steps of the algorithm.

Under one layer of abstraction, users have a bit more flexibility. QDax has similarities to the simple and commonly found `ask`/`tell` interface. The `ask` function is similar to the `emit` function in QDax and the `tell` function is similar to the `update` function in QDax. Likewise, the `eval` of solutions is analogous to the `scoring function` in QDax.
More importantly, QDax handles the archive management which is the key idea of QD algorihtms and not present or needed in standard optimization algorihtms or evolutionary strategies.
More importantly, QDax handles the archive management which is the key idea of QD algorithms and not present or needed in standard optimization algorithms or evolutionary strategies.

## Code Example
```python
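
The overview.md hunk above describes MAP-Elites in terms of a `container`, an `emitter` (`ask`/`emit`), a `scoring_function` (`eval`), and an archive `update` (`tell`). As a separate illustration — not the QDax API and not the file's own (truncated) code example — here is a toy, self-contained loop in that style; the task, names, and shapes are assumptions:

```python
# Toy MAP-Elites-style loop illustrating container / emitter / scoring_function;
# this is not the QDax implementation, only the ask/tell pattern described above.
import jax
import jax.numpy as jnp

num_cells, genotype_dim, batch_size = 50, 2, 8

def scoring_function(genotypes):
    # fitness: negative sphere; descriptor: first coordinate clipped to [0, 1]
    fitness = -jnp.sum(genotypes ** 2, axis=-1)
    descriptors = jnp.clip(genotypes[:, 0], 0.0, 1.0)
    return fitness, descriptors

def emit(repertoire, key):
    # "ask": perturb genotypes stored in randomly chosen cells
    genotypes, _ = repertoire
    idx_key, noise_key = jax.random.split(key)
    idx = jax.random.randint(idx_key, (batch_size,), 0, num_cells)
    noise = 0.1 * jax.random.normal(noise_key, (batch_size, genotype_dim))
    return genotypes[idx] + noise

def update(repertoire, candidates, fitness, descriptors):
    # "tell": keep a candidate only if it beats the current elite of its cell
    genotypes, fitnesses = repertoire
    cells = jnp.int32(descriptors * (num_cells - 1))  # 1-D grid container
    for g, f, c in zip(candidates, fitness, cells):
        if f > fitnesses[c]:
            genotypes = genotypes.at[c].set(g)
            fitnesses = fitnesses.at[c].set(f)
    return genotypes, fitnesses

key = jax.random.key(0)
repertoire = (
    jnp.zeros((num_cells, genotype_dim)),  # container of genotypes
    jnp.full((num_cells,), -jnp.inf),      # fitness of each cell's elite
)
for _ in range(100):
    key, subkey = jax.random.split(key)
    candidates = emit(repertoire, subkey)                              # ask
    fitness, descriptors = scoring_function(candidates)                # eval
    repertoire = update(repertoire, candidates, fitness, descriptors)  # tell
```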
6 changes: 3 additions & 3 deletions examples/aurora.ipynb
@@ -14,7 +14,7 @@
"# Optimizing with AURORA in Jax\n",
"\n",
"This notebook shows how to use QDax to find diverse and performing controllers in MDPs with [AURORA](https://arxiv.org/pdf/1905.11874.pdf).\n",
"It can be run locally or on Google Colab. We recommand to use a GPU. This notebook will show:\n",
"It can be run locally or on Google Colab. We recommend to use a GPU. This notebook will show:\n",
"\n",
"- how to define the problem\n",
"- how to create an emitter\n",
@@ -185,7 +185,7 @@
"metadata": {},
"outputs": [],
"source": [
"# Define the fonction to play a step with the policy in the environment\n",
"# Define the function to play a step with the policy in the environment\n",
"def play_step_fn(\n",
" env_state,\n",
" policy_params,\n",
@@ -323,7 +323,7 @@
"\n",
"@jax.jit\n",
"def update_scan_fn(carry: Any, unused: Any) -> Any:\n",
" \"\"\"Scan the udpate function.\"\"\"\n",
" \"\"\"Scan the update function.\"\"\"\n",
" (\n",
" repertoire,\n",
" random_key,\n",
4 changes: 2 additions & 2 deletions examples/cmaes.ipynb
@@ -15,7 +15,7 @@
"source": [
"# Optimizing with CMA-ES in Jax\n",
"\n",
"This notebook shows how to use QDax to find performing parameters on Rastrigin and Sphere problems with [CMA-ES](https://arxiv.org/pdf/1604.00772.pdf). It can be run locally or on Google Colab. We recommand to use a GPU. This notebook will show:\n",
"This notebook shows how to use QDax to find performing parameters on Rastrigin and Sphere problems with [CMA-ES](https://arxiv.org/pdf/1604.00772.pdf). It can be run locally or on Google Colab. We recommend to use a GPU. This notebook will show:\n",
"\n",
"- how to define the problem\n",
"- how to create a CMA-ES optimizer\n",
@@ -216,7 +216,7 @@
" # sample\n",
" samples, random_key = cmaes.sample(state, random_key)\n",
"\n",
" # udpate\n",
" # update\n",
" state = cmaes.update(state, samples)\n",
"\n",
" # check stop condition\n",
6 changes: 3 additions & 3 deletions examples/cmame.ipynb
@@ -13,7 +13,7 @@
"source": [
"# Optimizing with CMA-ME in Jax\n",
"\n",
"This notebook shows how to use QDax to find diverse and performing parameters on Rastrigin or Sphere problem with [CMA-ME](https://arxiv.org/pdf/1912.02400.pdf). It can be run locally or on Google Colab. We recommand to use a GPU. This notebook will show:\n",
"This notebook shows how to use QDax to find diverse and performing parameters on Rastrigin or Sphere problem with [CMA-ME](https://arxiv.org/pdf/1912.02400.pdf). It can be run locally or on Google Colab. We recommend to use a GPU. This notebook will show:\n",
"\n",
"- how to define the problem\n",
"- how to create a CMA-ME emitter\n",
@@ -207,7 +207,7 @@
"source": [
"random_key = jax.random.key(0)\n",
"# in CMA-ME settings (from the paper), there is no init population\n",
"# we multipy by zero to reproduce this setting\n",
"# we multiply by zero to reproduce this setting\n",
"initial_population = jax.random.uniform(random_key, shape=(batch_size, num_dimensions)) * 0.\n",
"\n",
"centroids = compute_euclidean_centroids(\n",
@@ -350,7 +350,7 @@
"axes[2].set_title(\"QD Score evolution during training\")\n",
"axes[2].set_aspect(0.95 / axes[2].get_data_ratio(), adjustable=\"box\")\n",
"\n",
"# udpate this variable to save your results locally\n",
"# update this variable to save your results locally\n",
"savefig = False\n",
"if savefig:\n",
" figname = \"cma_me_\" + optim_problem + \"_\" + str(num_dimensions) + \"_\" + emitter_type + \".png\"\n",
2 changes: 1 addition & 1 deletion examples/cmamega.ipynb
@@ -13,7 +13,7 @@
"source": [
"# Optimizing with CMA-MEGA in Jax\n",
"\n",
"This notebook shows how to use QDax to find diverse and performing parameters on the Rastrigin problem with [CMA-MEGA](https://arxiv.org/pdf/2106.03894.pdf). It can be run locally or on Google Colab. We recommand to use a GPU. This notebook will show:\n",
"This notebook shows how to use QDax to find diverse and performing parameters on the Rastrigin problem with [CMA-MEGA](https://arxiv.org/pdf/2106.03894.pdf). It can be run locally or on Google Colab. We recommend to use a GPU. This notebook will show:\n",
"\n",
"- how to define the problem\n",
"- how to create a cma-mega emitter\n",
8 changes: 4 additions & 4 deletions examples/dads.ipynb
@@ -13,7 +13,7 @@
"source": [
"# Training DADS with Jax\n",
"\n",
"This notebook shows how to use QDax to train [DADS](https://arxiv.org/abs/1907.01657) on a Brax environment. It can be run locally or on Google Colab. We recommand to use a GPU. This notebook will show:\n",
"This notebook shows how to use QDax to train [DADS](https://arxiv.org/abs/1907.01657) on a Brax environment. It can be run locally or on Google Colab. We recommend to use a GPU. This notebook will show:\n",
"- how to define an environment\n",
"- how to define a replay buffer\n",
"- how to create a dads instance\n",
@@ -88,7 +88,7 @@
"\n",
"Most hyperparameters are similar to those introduced in [SAC paper](https://arxiv.org/abs/1801.01290), [DIAYN paper](https://arxiv.org/abs/1802.06070) and [DADS paper](https://arxiv.org/abs/1907.01657).\n",
"\n",
"The parameter `descriptor_full_state` is less straightforward, it concerns the information used for diversity seeking and dynamics. In DADS, one can use the full state for diversity seeking, but one can also use a prior to focus on an interesting aspect of the state. Actually, priors are often used in experiments, for instance, focusing on the x/y position rather than the full position. When `descriptor_full_state` is set to True, it uses the full state, when it is set to False, it uses the 'state descriptor' retrieved by the environment. Hence, it is required that the environment has one. (All the `_uni`, `_omni` do, same for `anttrap`, `antmaze` and `pointmaze`.) In the future, we will add an option to use a prior function direclty on the full state."
"The parameter `descriptor_full_state` is less straightforward, it concerns the information used for diversity seeking and dynamics. In DADS, one can use the full state for diversity seeking, but one can also use a prior to focus on an interesting aspect of the state. Actually, priors are often used in experiments, for instance, focusing on the x/y position rather than the full position. When `descriptor_full_state` is set to True, it uses the full state, when it is set to False, it uses the 'state descriptor' retrieved by the environment. Hence, it is required that the environment has one. (All the `_uni`, `_omni` do, same for `anttrap`, `antmaze` and `pointmaze`.) In the future, we will add an option to use a prior function directly on the full state."
]
},
{
@@ -258,7 +258,7 @@
" deterministic=True,\n",
" env=eval_env,\n",
" skills=skills,\n",
" evaluation=True, # needed by normalizatoin mecanism\n",
" evaluation=True, # needed by normalizatoin mechanism\n",
")\n",
"\n",
"play_step = functools.partial(\n",
@@ -308,7 +308,7 @@
"source": [
"## Prepare last utils for the training loop\n",
"\n",
"Many Reinforcement Learning algorithm have similar training process, that can be divided in a precise training step that is repeted several times. Most of the differences are captured inside the `play_step` and in the `update` functions. Hence, once those are defined, the iteration works in the same way. For this reason, instead of coding the same function for each algorithm, we have created the `do_iteration_fn` that can be used by most of them. In the training script, the user just has to partial the function to give `play_step`, `update` plus a few other parameter."
"Many Reinforcement Learning algorithm have similar training process, that can be divided in a precise training step that is repeated several times. Most of the differences are captured inside the `play_step` and in the `update` functions. Hence, once those are defined, the iteration works in the same way. For this reason, instead of coding the same function for each algorithm, we have created the `do_iteration_fn` that can be used by most of them. In the training script, the user just has to partial the function to give `play_step`, `update` plus a few other parameter."
]
},
{
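
The dads.ipynb (and diayn.ipynb) cells above mention that a shared `do_iteration_fn` is specialised with `functools.partial` by passing `play_step`, `update`, and a few other parameters. The sketch below illustrates only that pattern; the function and its signature are stand-ins, not the actual QDax utility:

```python
# Illustration of the "partial a generic iteration function" pattern only;
# do_iteration, play_step and update below are toy stand-ins, not QDax code.
import functools

def do_iteration(carry, play_step_fn, update_fn):
    state, buffer = carry
    state, transition = play_step_fn(state)  # collect one step of experience
    buffer = buffer + [transition]           # store it in the replay buffer
    state = update_fn(state, buffer)         # learn from the buffer
    return state, buffer

def play_step(state):          # dummy environment step
    return state, {"obs": state}

def update(state, buffer):     # dummy learning update
    return state + 1

# Specialise once; the resulting function can drive any algorithm exposing a
# compatible play_step / update pair.
iteration_fn = functools.partial(do_iteration, play_step_fn=play_step, update_fn=update)

carry = (0, [])
for _ in range(3):
    carry = iteration_fn(carry)
```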
4 changes: 2 additions & 2 deletions examples/dcrlme.ipynb
@@ -15,7 +15,7 @@
"\n",
"This notebook shows how to use QDax to find diverse and performing controllers in MDPs with [Descriptor-Conditioned Reinforcement Learning MAP-Elites (DCRL-ME)](https://arxiv.org/abs/2401.08632).\n",
"This algorithm extends and improves upon [Descriptor-Conditioned Gradients MAP-Elites (DCG-ME)](https://dl.acm.org/doi/abs/10.1145/3583131.3590503)\n",
"It can be run locally or on Google Colab. We recommand to use a GPU. This notebook will show:\n",
"It can be run locally or on Google Colab. We recommend to use a GPU. This notebook will show:\n",
"\n",
"- how to define the problem\n",
"- how to create the DCRL emitter\n",
@@ -200,7 +200,7 @@
"metadata": {},
"outputs": [],
"source": [
"# Define the fonction to play a step with the policy in the environment\n",
"# Define the function to play a step with the policy in the environment\n",
"def play_step_fn(\n",
" env_state: EnvState, policy_params: Params, random_key: RNGKey\n",
") -> Tuple[EnvState, Params, RNGKey, DCRLTransition]:\n",
6 changes: 3 additions & 3 deletions examples/diayn.ipynb
@@ -13,7 +13,7 @@
"source": [
"# Training DIAYN with Jax\n",
"\n",
"This notebook shows how to use QDax to train [DIAYN](https://arxiv.org/abs/1802.06070) on a Brax environment. It can be run locally or on Google Colab. We recommand to use a GPU. This notebook will show:\n",
"This notebook shows how to use QDax to train [DIAYN](https://arxiv.org/abs/1802.06070) on a Brax environment. It can be run locally or on Google Colab. We recommend to use a GPU. This notebook will show:\n",
"- how to define an environment\n",
"- how to define a replay buffer\n",
"- how to create a diayn instance\n",
@@ -89,7 +89,7 @@
"\n",
"Most hyperparameters are similar to those introduced in [SAC paper](https://arxiv.org/abs/1801.01290) and [DIAYN paper](https://arxiv.org/abs/1802.06070).\n",
"\n",
"The parameter `descriptor_full_state` is less straightforward, it concerns the information used for diversity seeking and discrimination. In DIAYN, one can use the full state for diversity seeking, but one can also use a prior to focus on an interesting aspect of the state. Actually, priors are often used in experiments, for instance, focusing on the x/y position rather than the full position. When `descriptor_full_state` is set to True, it uses the full state, when it is set to False, it uses the 'state descriptor' retrieved by the environment. Hence, it is required that the environment has one. (All the `_uni`, `_omni` do, same for `anttrap`, `antmaze` and `pointmaze`.) In the future, we will add an option to use a prior function direclty on the full state."
"The parameter `descriptor_full_state` is less straightforward, it concerns the information used for diversity seeking and discrimination. In DIAYN, one can use the full state for diversity seeking, but one can also use a prior to focus on an interesting aspect of the state. Actually, priors are often used in experiments, for instance, focusing on the x/y position rather than the full position. When `descriptor_full_state` is set to True, it uses the full state, when it is set to False, it uses the 'state descriptor' retrieved by the environment. Hence, it is required that the environment has one. (All the `_uni`, `_omni` do, same for `anttrap`, `antmaze` and `pointmaze`.) In the future, we will add an option to use a prior function directly on the full state."
]
},
{
@@ -299,7 +299,7 @@
"source": [
"## Prepare last utils for the training loop\n",
"\n",
"Many Reinforcement Learning algorithm have similar training process, that can be divided in a precise training step that is repeted several times. Most of the differences are captured inside the `play_step` and in the `update` functions. Hence, once those are defined, the iteration works in the same way. For this reason, instead of coding the same function for each algorithm, we have created the `do_iteration_fn` that can be used by most of them. In the training script, the user just has to partial the function to give `play_step`, `update` plus a few other parameter."
"Many Reinforcement Learning algorithm have similar training process, that can be divided in a precise training step that is repeated several times. Most of the differences are captured inside the `play_step` and in the `update` functions. Hence, once those are defined, the iteration works in the same way. For this reason, instead of coding the same function for each algorithm, we have created the `do_iteration_fn` that can be used by most of them. In the training script, the user just has to partial the function to give `play_step`, `update` plus a few other parameter."
]
},
{
4 changes: 2 additions & 2 deletions examples/distributed_mapelites.ipynb
@@ -14,7 +14,7 @@
"# Optimizing with MAP-Elites in Jax (multi-devices example)\n",
"\n",
"This notebook shows how to use QDax to find diverse and performing controllers in MDPs with [MAP-Elites](https://arxiv.org/abs/1504.04909).\n",
"It can be run locally or on Google Colab. We recommand to use a GPU. This notebook will show:\n",
"It can be run locally or on Google Colab. We recommend to use a GPU. This notebook will show:\n",
"\n",
"- how to define the problem\n",
"- how to create an emitter\n",
@@ -215,7 +215,7 @@
"metadata": {},
"outputs": [],
"source": [
"# Define the fonction to play a step with the policy in the environment\n",
"# Define the function to play a step with the policy in the environment\n",
"def play_step_fn(\n",
" env_state,\n",
" policy_params,\n",
4 changes: 2 additions & 2 deletions examples/mapelites.ipynb
@@ -14,7 +14,7 @@
"# Optimizing with MAP-Elites in Jax\n",
"\n",
"This notebook shows how to use QDax to find diverse and performing controllers in MDPs with [MAP-Elites](https://arxiv.org/abs/1504.04909).\n",
"It can be run locally or on Google Colab. We recommand to use a GPU. This notebook will show:\n",
"It can be run locally or on Google Colab. We recommend to use a GPU. This notebook will show:\n",
"\n",
"- how to define the problem\n",
"- how to create an emitter\n",
@@ -172,7 +172,7 @@
"metadata": {},
"outputs": [],
"source": [
"# Define the fonction to play a step with the policy in the environment\n",
"# Define the function to play a step with the policy in the environment\n",
"def play_step_fn(\n",
" env_state,\n",
" policy_params,\n",
6 changes: 3 additions & 3 deletions examples/mees.ipynb
@@ -18,7 +18,7 @@
"# Optimizing with MEES in Jax\n",
"\n",
"This notebook shows how to use QDax to find diverse and performing controllers with MAP-Elites-ES introduced in [Scaling MAP-Elites to Deep Neuroevolution](https://dl.acm.org/doi/pdf/10.1145/3377930.3390217).\n",
"It can be run locally or on Google Colab. We recommand to use a GPU. This notebook will show:\n",
"It can be run locally or on Google Colab. We recommend to use a GPU. This notebook will show:\n",
"\n",
"- how to define the problem\n",
"- how to create the MEES emitter\n",
@@ -189,7 +189,7 @@
},
"outputs": [],
"source": [
"# Define the fonction to play a step with the policy in the environment\n",
"# Define the function to play a step with the policy in the environment\n",
"def play_step_fn(\n",
" env_state,\n",
" policy_params,\n",
@@ -247,7 +247,7 @@
" behavior_descriptor_extractor=bd_extraction_fn,\n",
")\n",
"\n",
"# Prepare the scoring functions for the offspring generated folllowing\n",
"# Prepare the scoring functions for the offspring generated following\n",
"# the approximated gradient (each of them is evaluated 30 times)\n",
"sampling_fn = functools.partial(\n",
" sampling,\n",
2 changes: 1 addition & 1 deletion examples/mome.ipynb
@@ -15,7 +15,7 @@
"source": [
"# Optimizing multiple objectives with MOME in Jax\n",
"\n",
"This notebook shows how to use QDax to find diverse and performing parameters on a multi-objectives Rastrigin problem, using [Multi-Objective MAP-Elites](https://arxiv.org/pdf/2202.03057.pdf) (MOME) algorithm. It can be run locally or on Google Colab. We recommand to use a GPU. This notebook will show:\n",
"This notebook shows how to use QDax to find diverse and performing parameters on a multi-objectives Rastrigin problem, using [Multi-Objective MAP-Elites](https://arxiv.org/pdf/2202.03057.pdf) (MOME) algorithm. It can be run locally or on Google Colab. We recommend to use a GPU. This notebook will show:\n",
"\n",
"- how to define the problem\n",
"- how to create an emitter instance\n",