From 5f70d3847c5228664ac2a44228ea3d3f44dcf673 Mon Sep 17 00:00:00 2001
From: alessandrofelder
Date: Thu, 14 Dec 2023 15:02:25 +0100
Subject: [PATCH] [skip ci] Flesh out secondary purposes of repo

Co-authored-by: sfmig <33267254+sfmig@users.noreply.github.com>
---
 README.md | 16 ++++++----------
 1 file changed, 6 insertions(+), 10 deletions(-)

diff --git a/README.md b/README.md
index b6dc752c..b6ea28b9 100644
--- a/README.md
+++ b/README.md
@@ -2,16 +2,12 @@
 
 ## Main purpose of brainglobe-workflows
 
-The main purpose of this repository is to host Python scripts and Command Line Interfaces that capture typical workflows of the BrainGlobe tools.
-These workflows typically combine several BrainGlobe tools (possibly together with other tools) to achieve a goal,
-such as whole brain cell detection and atlas registration.
+The main purpose of this repository is to host Python scripts and Command Line Interfaces that capture typical workflows of the BrainGlobe tools. These workflows should be representative of the most common use-cases and are meant to be easy to reuse.
+They also serve as an example of how to combine several BrainGlobe tools to achieve a goal, such as whole brain cell detection and atlas registration.
 
 ## Secondary purposes of brainglobe-workflows, for developers
 
-Secondarily, this repository further has related purposes for BrainGlobe code development itself.
-* It contains benchmarks for these workflows to ensure performance is stable as the code changes
-  * (default) run benchmarks on CI with small data
-  * (optionally) specify a configuration (as a json file) to run benchmarks on locally available, typically "real", data
-  * these configurations are not intended to be user-facing
-
-Note that to catch dependency updates breaking our code early, we additionally run and benchmark the workflows locally on a desktop machine with large example data.
+We also use these workflows to support code development. We do this by regularly benchmarking the time they take to complete, to ensure performance remains stable as the code changes.
+* Developers can install these benchmarks locally (from a clone of this repository) via `pip install .[dev]`. Executing `asv run` then runs the benchmarks with default parameters on a small dataset that is downloaded from [GIN](https://gin.g-node.org/G-Node/info/wiki). See [the asv docs](https://asv.readthedocs.io/en/v0.6.1/using.html#running-benchmarks) for further details on how to run benchmarks.
+* Developers can also run these benchmarks on locally available data, by specifying the relevant paths in an input configuration file (a JSON file).
+* We additionally run and benchmark the workflows on an internal desktop machine with large example datasets. These benchmarks are run periodically and the results are made publicly available.
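
For reference, the developer workflow described in the first new bullet point boils down to two commands. This is a minimal sketch, assuming the commands are run from the root of a local clone of this repository and that the `dev` extra provides asv and the benchmarks:

```sh
# Install the package from a local checkout, together with its development
# dependencies (assumed here to include the asv benchmarking tool).
pip install .[dev]

# Run the benchmarks with default parameters; the small test dataset is
# downloaded from GIN automatically.
asv run
```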
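
The patch does not spell out the schema of the input configuration file mentioned in the second bullet point, so the sketch below is purely illustrative; every field name in it is hypothetical:

```sh
# Write a hypothetical input configuration file pointing the benchmarks at
# locally available data. The field names below are invented for illustration
# and do not come from the patch.
cat > local_benchmark_config.json <<'EOF'
{
  "input_data_dir": "/path/to/local/data",
  "output_dir": "/path/to/benchmark/results"
}
EOF
```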