
Commit

[skip ci] Flesh out secondary purposes of repo
Co-authored-by: sfmig <[email protected]>
alessandrofelder and sfmig committed Dec 14, 2023
1 parent af7935f commit 5f70d38
Showing 1 changed file with 6 additions and 10 deletions.
README.md: 6 additions & 10 deletions
@@ -2,18 +2,14 @@

## Main purpose of brainglobe-workflows

The main purpose of this repository is to host Python scripts and Command Line Interfaces that capture typical workflows of the BrainGlobe tools.

-These workflows should be representative of the most common use-cases and are meant to be easy to reuse. They also serve as an example of how to combine several BrainGlobe tools to achieve a goal, such as whole brain cell detection and atlas registration.
+These workflows typically combine several BrainGlobe tools (possibly together with other tools) to achieve a goal,
+such as whole brain cell detection and atlas registration.

## Secondary purposes of brainglobe-workflows, for developers

-Secondarily, this repository further has related purposes for BrainGlobe code development itself.
-* It contains benchmarks for these workflows to ensure performance is stable as the code changes
-* (default) run benchmarks on CI with small data
-* (optionally) specify a configuration (as a json file) to run benchmarks on locally available, typically "real", data
-* these configurations are not intended to be user-facing

-Note that to catch dependency updates breaking our code early, we additionally run and benchmark the workflows locally on a desktop machine with large example data.
+We also use these workflows to support code development. We do this by regularly benchmarking the time they take to complete, to ensure performance is stable as the code changes.
+* Developers can install these benchmarks locally via `pip install .[dev]`. By executing `asv run`, the benchmarks will run with default parameters on a small dataset that is downloaded from [GIN](https://gin.g-node.org/G-Node/info/wiki). See [the asv docs](https://asv.readthedocs.io/en/v0.6.1/using.html#running-benchmarks) for further details on how to run benchmarks (a minimal command sketch follows after the diff).
+* Developers can also run these benchmarks on data available locally, by specifying the relevant paths in an input configuration file (JSON file).
+* We additionally run and benchmark the workflows locally on an internal desktop machine with large example datasets. These benchmarks are run periodically and the results are made publicly available.
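
Below is a minimal sketch of the local benchmarking steps described in the added README text. It assumes the working directory is a local clone of brainglobe-workflows and that the `dev` extra pulls in asv and the benchmark dependencies; `asv run` and `asv run --quick` are standard asv commands (see the asv docs linked above).

```sh
# Minimal sketch, assuming a local clone of brainglobe-workflows is the
# current directory and the `dev` extra provides asv and the benchmarks.
pip install ".[dev]"

# Run the benchmark suite with default parameters; a small test dataset
# is downloaded (from GIN, as noted in the README text above).
asv run

# While iterating locally, a single quick pass per benchmark is often enough.
asv run --quick
```

Running against locally available data instead relies on the JSON configuration file mentioned above; its schema is repository-specific, so it is not sketched here.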
