diff --git a/docs/source/blog/version1/core_and_napari_merge.md b/docs/source/blog/version1/core_and_napari_merge.md
new file mode 100644
index 00000000..1a68c1a3
--- /dev/null
+++ b/docs/source/blog/version1/core_and_napari_merge.md
@@ -0,0 +1,44 @@
+---
+blogpost: true
+date: Jan 2, 2024
+author: Will Graham
+location: London, England
+category: BrainGlobe-v1
+language: English
+---
+
+# `cellfinder-core` and `cellfinder-napari` have merged
+
+[BrainGlobe version 1](./version_1_announcement.md) is almost ready, and the next stage of its release journey is the merging of the "backend" `cellfinder-core` and `cellfinder-napari` packages into one.
+We had previously [migrated the `cellfinder` data analysis workflow](./cellfinder_migration_live.md) into the new `brainglobe-workflows` package, as part of our efforts to separate "backend" BrainGlobe tools from common analysis pipelines.
+This means that there is no longer any need to keep the "backend" package (`cellfinder-core`) and the visualisation plugin (`cellfinder-napari`) in separate, lower-level packages.
+As such:
+
+- [`cellfinder-core`](https://github.com/brainglobe/cellfinder-core) and [`cellfinder-napari`](https://github.com/brainglobe/cellfinder-napari) will be deprecated.
+- [A _package_ called `cellfinder`](https://github.com/brainglobe/cellfinder) will become available as a replacement for this functionality. Note that this will re-use the old "cellfinder" name that the command-line-interface had, [prior to its migration](./cellfinder_migration_live.md).
+- The `cellfinder-napari` plugin is now simply called "cellfinder", both internally and when loaded in napari.
+- The "cellfinder" name for the whole-brain registration and analysis workflow provided by [`brainglobe-workflows`](/documentation/brainglobe-workflows/index.md) will be deprecated to avoid confusion. This workflow will now be available as "`brainmapper`".
+
+From a user perspective, this is just a restructuring and reorganisation of existing functionality, and the renaming of the cellfinder command-line tool to `brainmapper`.
+If you were using the `cellfinder-core` backend, or the `cellfinder-napari` plugins, you'll need to uninstall those packages and install version `1.0.0` (or later) of the [`cellfinder` _package_](https://pypi.org/project/cellfinder/).
+By merging the two packages, we hope to reduce the complexity of a BrainGlobe install and make the tools' API more intuitive.
+For developers, this merge further reduces the number of repositories that we have to maintain.
+It also ensures we continue to distinguish between BrainGlobe tools and workflows.
+Finally, it addresses a long-standing naming issue, where the old "cellfinder" (now `brainmapper`) workflow was often confused with the corresponding backend packages.
+
+## What do I need to do?
+
+If you were previously using `cellfinder-core` in your scripts, you will need to uninstall it and install `cellfinder` version `1.0.0` or greater.
+Once you've done this, it should just be a case of changing each of your `from cellfinder_core import X` statements to `from cellfinder.core import X`.
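If you have several scripts to update, a simple search-and-replace can do the rename for you. The helper below is only an illustrative sketch (`update_imports` is not part of any BrainGlobe package); it assumes your imports use the plain `cellfinder_core` module path:

```python
import re

def update_imports(source: str) -> str:
    """Rewrite old cellfinder_core imports to the new cellfinder.core layout."""
    # Handles `import cellfinder_core`, `from cellfinder_core import X`,
    # and submodules such as `cellfinder_core.detect`.
    return re.sub(r"\bcellfinder_core\b", "cellfinder.core", source)

print(update_imports("from cellfinder_core.detect import main"))
# from cellfinder.core.detect import main
```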
+
+If you were previously using `cellfinder-napari`, you'll need to uninstall it and install `cellfinder[napari]`, which will fetch `cellfinder` and its optional napari dependency.
+Any references you have made to the `cellfinder_napari` plugin in your analysis will need to refer to the "`cellfinder` plugin" instead.
+
+If you were using the cellfinder command-line tool that was provided by `brainglobe-workflows`, you will need to update your version of `brainglobe-workflows`.
+If you were still using the cellfinder command-line tool provided by the `cellfinder` _package_, with a version less than `1.0.0`, you will need to take slightly more involved action - we recommend you follow the instructions in the [full changelog](#full-changelog).
+
+You can take a look at the instructions in the [full changelog](#full-changelog) for more details about updating to the new package.
+
+## Full changelog
+
+You can find the [full changelog on the releases page](../../community/releases/v1/cellfinder-core-and-plugin-merge.md).
diff --git a/docs/source/community/developers/repositories/cellfinder-core/index.md b/docs/source/community/developers/repositories/cellfinder-core/index.md
index ad702f01..3d946c03 100644
--- a/docs/source/community/developers/repositories/cellfinder-core/index.md
+++ b/docs/source/community/developers/repositories/cellfinder-core/index.md
@@ -1,4 +1,4 @@
-# cellfinder-core
+# cellfinder.core
## Cell detection
@@ -12,12 +12,12 @@ Cell detection in cellfinder has three stages:
#### 2D filtering
-Code can be found in `cellfinder_core/detect/filters/plane`.
+Code can be found in `cellfinder/core/detect/filters/plane`.
Each plane of data is filtered independently, and in parallel across a number of processes.
This part of processing performs two tasks:
-1. Applies a filter to enhance peaks in the data (``cellfinder_core/detect/filters/plane/classical_filter.py``).
+1. Applies a filter to enhance peaks in the data (``cellfinder/core/detect/filters/plane/classical_filter.py``).
This consists of (in order)
1. a median filter (`scipy.signal.medfilt2d`)
2. a gaussian filter (`scipy.ndimage.gaussian_filter`)
@@ -38,7 +38,7 @@ Memory usage during 2D filtering, for each plane, is the following:
#### 3D filtering
-Code can be found in `cellfinder_core/detect/filters/volume/ball_filter.py`.
+Code can be found in `cellfinder/core/detect/filters/volume/ball_filter.py`.
Both this step and the structure detection step take place in the main `Python` process, with no parallelism. As the planes are processed in the 2D filtering step, they are passed to this step. When `ball_z_size` planes have been handed over, 3D filtering begins.
The 3D filter stores a 3D array that has depth `ball_z_size`, and contains `ball_z_size` number of planes. This is a small 3D slice of the original data. A spherical kernel runs across the x, y dimensions, and where enough intensity overlaps with the spherical kernel the voxel at the centre of the kernel is marked as being part of a cell. The output of this step is the central plane of the array, with marked cells.
@@ -50,7 +50,7 @@ Memory usage information during 3D filtering:
#### Structure detection
-Code can be found in `cellfinder_core/detect/filters/volume/structure_detection.py`.
+Code can be found in `cellfinder/core/detect/filters/volume/structure_detection.py`.
This step takes the planes output from 3D filtering with marked cell voxels, and detects collections of voxels that are adjacent.
Memory usage information during structure detection:
diff --git a/docs/source/community/releases/v1/cellfinder-core-and-plugin-merge.md b/docs/source/community/releases/v1/cellfinder-core-and-plugin-merge.md
new file mode 100644
index 00000000..49a70158
--- /dev/null
+++ b/docs/source/community/releases/v1/cellfinder-core-and-plugin-merge.md
@@ -0,0 +1,82 @@
+# Version 1: changes to the cellfinder backend and plugin
+
+The `cellfinder-core` package and `cellfinder-napari` plugin have now been migrated to a [package called cellfinder](https://github.com/brainglobe/cellfinder).
+Please note that this is enacting the [future warning from the earlier `cellfinder` (CLI) migration](./cellfinder-migration.md#future-warning).
+If you wish to continue using both the `cellfinder` command-line interface *and* the backend Python API, you will need to install the latest version of `brainglobe-workflows`, which will automatically fetch the new version of the `cellfinder` package.
+See the [updating section](#updating) for more details.
+
+## `cellfinder-core`
+
+If you were previously using the Python API to `cellfinder-core`, you will need to uninstall `cellfinder-core` from your environment and install `cellfinder` version `1.0.0` or later.
+
+The internal package structure has not changed in this move, but it is now a submodule of the new `cellfinder` package.
+So if you were using the Python API in your scripts, you will need to change any `from cellfinder_core import X` statements to `from cellfinder.core import X`.
+Once you make this change, everything should work as it was before.
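For a one-off update of an existing script, a stream edit along these lines can make the change for you. This is only a sketch - `my_analysis.py` stands in for your own script, and `sed -i` as used here assumes GNU sed:

```shell
# Illustrative only: create a stand-in for your own analysis script.
printf 'from cellfinder_core import main\n' > my_analysis.py

# Rewrite the old module path in place; check the result before committing.
sed -i 's/cellfinder_core/cellfinder.core/g' my_analysis.py
cat my_analysis.py
```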
+
+## `cellfinder-napari`
+
+If you were previously using the napari plugin for visualising output data, you will need to uninstall `cellfinder-napari` and install `cellfinder[napari]`, which includes the optional napari dependency.
+
+The plugin itself has not undergone any interface changes, but is now just called "cellfinder" rather than "cellfinder-napari" when viewed from the napari widget panel.
+
+## `brainglobe-workflows`
+
+`brainglobe-workflows` now ships with the new version of `cellfinder`, containing the `cellfinder.core` and `cellfinder.napari` submodules.
+`brainglobe-workflows` is now the *only* package that provides the old "cellfinder" command-line interface, or workflow.
+This workflow is now _only_ available under the name `brainmapper` - the name `cellfinder` is reserved for the backend package that contains the merged `cellfinder-core` and `cellfinder-napari` packages mentioned above.
+
+## Updating
+
+The update steps that you will need to perform vary depending on how you currently use cellfinder (CLI) and whether you upgraded to `brainglobe-workflows` previously.
+Regardless, before you begin the process of updating, we recommend you uninstall `cellfinder-core` and `cellfinder-napari` from your environment:
+
+```bash
+pip uninstall cellfinder-core cellfinder-napari
+```
+
+Alternatively, you can create a new, clean environment to install into.
+
+### I previously updated to `brainglobe-workflows`
+
+If you previously updated from the old cellfinder package to `brainglobe-workflows`, then you should be able to simply update to the latest version of `brainglobe-workflows`, with
+
+```bash
+pip install --upgrade brainglobe-workflows
+```
+
+This will fetch the new `cellfinder` package containing the `cellfinder.core` and `cellfinder.napari` submodules.
+You can then continue to use the CLI to run the workflow as before; note, however, that the workflow is now only available as "`brainmapper`".
+
+### I have not updated to `brainglobe-workflows`, but I don't use the cellfinder CLI
+
+If you don't use the `cellfinder` CLI, then you can just upgrade your version of the `cellfinder` package to version `1.0.0` or later:
+
+```bash
+pip install --upgrade cellfinder
+```
+
+If you do want to make use of the old "cellfinder" workflow, you will need to install `brainglobe-workflows` and use it under its new name, `brainmapper`.
+
+### I have not updated to `brainglobe-workflows`, and I use the cellfinder CLI
+
+In this case, we strongly recommend you either create a new Python environment to install into (and delete your old one), or remove `cellfinder`, `cellfinder-core`, and `cellfinder-napari` from your current environment with
+
+```bash
+pip uninstall cellfinder cellfinder-core cellfinder-napari
+```
+
+You may also wish to use `pip` to uninstall `tensorflow`, to avoid potential version conflicts with the new install.
+
+Once you have cleaned your environment, you will need to install the latest version of `brainglobe-workflows`:
+
+```bash
+pip install brainglobe-workflows
+```
+
+You will now have access to the `brainmapper` workflow from within your environment.
+This is the same as the old "cellfinder" workflow that you were using previously - but now it is [supplied by the `brainglobe-workflows` package](/blog/version1/cellfinder_migration_live.md).
+You will also have the latest version of the `cellfinder` package (`1.0.0` or later) installed:
+
+- `cellfinder-core` is included as a submodule, `cellfinder.core`.
+- `cellfinder-napari` is included as a submodule, and the plugin has been renamed to just "cellfinder" when viewed in napari.
+- To access the old "cellfinder" command-line tool or workflow, you now need to call `brainmapper`. The interface has not changed, just the name.
diff --git a/docs/source/community/releases/v1/index.md b/docs/source/community/releases/v1/index.md
index 31bffea1..41460bb0 100644
--- a/docs/source/community/releases/v1/index.md
+++ b/docs/source/community/releases/v1/index.md
@@ -10,8 +10,9 @@ You can follow the links provided for more information; including a listing of r
brainreg and brainreg-napari have been merged into a single package | [Further info](brainreg.md#brainreg-and-brainreg-napari) |
brainreg-segment has been renamed to brainglobe-segmentation | [Further info](brainreg.md#brainreg-segment) |
The `cellfinder` command-line-interface has been moved into `brainglobe-workflows` | [Further info](cellfinder-migration.md) |
-The cellfinder package is deprecated - it will later be recycled to merge some backend functionality | [Further info](cellfinder-migration.md#cellfinder-repository)
-The cellfinder Docker image is discontinued | [Further info](cellfinder-migration.md#cellfinder-docker-image)
+The cellfinder package is deprecated - it will later be recycled to merge some backend functionality | [Further info](cellfinder-migration.md#cellfinder-repository) |
+The cellfinder Docker image is discontinued | [Further info](cellfinder-migration.md#cellfinder-docker-image) |
+cellfinder-core and cellfinder-napari merged into "new cellfinder" | [Further info](cellfinder-core-and-plugin-merge.md) |
## Complete index
@@ -21,4 +22,5 @@ The cellfinder Docker image is discontinued | [Further info](cellfinder-migratio
brainreg
cellfinder-migration
+cellfinder-core-and-plugin-merge
```
diff --git a/docs/source/community/roadmaps/index.md b/docs/source/community/roadmaps/index.md
index 2468ffd7..aa654ec3 100644
--- a/docs/source/community/roadmaps/index.md
+++ b/docs/source/community/roadmaps/index.md
@@ -1,9 +1,8 @@
# Roadmaps
-BrainGlobe roadmaps give you an idea of where the project is heading over the next few months and years.
-These roadmaps are usually at a high level for the BrainGlobe project as a whole.
-Plans for individual tools are usually discussed on their respective repos, and
-over at [Zulip](https://brainglobe.zulipchat.com/).
+BrainGlobe roadmaps give you an idea of where the project is heading over the next few months and years.
+These roadmaps are usually at a high level for the BrainGlobe project as a whole.
+Plans for individual tools are usually discussed on their respective repos, and over at [Zulip](https://brainglobe.zulipchat.com/).
BrainGlobe is a community project, and as such we welcome your feedback. Please [get in touch with us](/contact) if you have
any questions or suggestions. These roadmaps are living documents and will likely be revisited regularly.
@@ -12,4 +11,4 @@ any questions or suggestions. These roadmaps are living documents and will likel
:maxdepth: 1
november-2023
-```
\ No newline at end of file
+```
diff --git a/docs/source/documentation/cellfinder/user-guide/command-line/candidate-detection.md b/docs/source/documentation/brainglobe-workflows/brainmapper/candidate-detection.md
similarity index 99%
rename from docs/source/documentation/cellfinder/user-guide/command-line/candidate-detection.md
rename to docs/source/documentation/brainglobe-workflows/brainmapper/candidate-detection.md
index 14cf009e..419712ce 100644
--- a/docs/source/documentation/cellfinder/user-guide/command-line/candidate-detection.md
+++ b/docs/source/documentation/brainglobe-workflows/brainmapper/candidate-detection.md
@@ -15,4 +15,3 @@ considered a nucleus pixel. **Default: 0.6**
Given as a fraction of the soma-diameter. **Default: 0.2**
* `--threshold` The cell threshold, in multiples of the standard deviation above the mean. **Default: 10**
* `--soma-spread-factor` Soma size spread factor \(for splitting up cell clusters\). **Default: 1.4**
-
diff --git a/docs/source/documentation/cellfinder/user-guide/command-line/classification.md b/docs/source/documentation/brainglobe-workflows/brainmapper/classification.md
similarity index 99%
rename from docs/source/documentation/cellfinder/user-guide/command-line/classification.md
rename to docs/source/documentation/brainglobe-workflows/brainmapper/classification.md
index ee77a697..d130f198 100644
--- a/docs/source/documentation/cellfinder/user-guide/command-line/classification.md
+++ b/docs/source/documentation/brainglobe-workflows/brainmapper/classification.md
@@ -21,4 +21,3 @@ trained on. Set this to adjust the pixel sizes of the extracted cubes. **Defaul
* `--cube-depth` The depth \(z\)\) of the cubes to extract in pixels\(must be even\). **Default 20**
* `--save-empty-cubes` If a cube cannot be extracted \(e.g. to close to the edge of the image\), save an empty cube
instead. Useful to keep track of all cell candidates.
-
diff --git a/docs/source/documentation/brainglobe-workflows/brainmapper/cli.md b/docs/source/documentation/brainglobe-workflows/brainmapper/cli.md
new file mode 100644
index 00000000..896ca949
--- /dev/null
+++ b/docs/source/documentation/brainglobe-workflows/brainmapper/cli.md
@@ -0,0 +1,82 @@
+# Command line interface
+
+## Basic usage
+
+To run `brainmapper`, use this general syntax
+
+```bash
+brainmapper -s signal_channel_images optional_signal_channel_images -b background_channel_images -o /path/to/output_directory -v 5 2 2 --orientation asl
+```
+
+:::{hint}
+All options can be found by running `brainmapper -h`
+:::
+
+## Arguments
+
+### Mandatory
+
+- `-s` or `--signal-planes-paths` Path to the directory of the signal files. Can also be a text file pointing to the files.
+- **There can be as many signal channels as you like, and each will be treated independently**.
+- `-b` or `--background-planes-path` Path to the directory of the background files. Can also be a text file pointing to the files.
+- **This background channel will be used for all signal channels**
+- `-o` or `--output-dir` Output directory for all intermediate and final results
+
+:::{caution}
+You must also specify the orientation and voxel size of your data, see [Image definition](/documentation/setting-up/image-definition).
+:::
+
+### Optional Arguments
+
+#### Only run parts of `brainmapper`
+
+If for some reason you don't want some parts of `brainmapper` to run, you can use the following options.
+If a part of the pipeline is required by another part it will be run (i.e. `--no-detection` won't do anything unless `--no-classification` is also used).
+`brainmapper` will attempt to work out which parts of the pipeline have already been run (in a given output directory) and not run them again if appropriate.
+
+- `--no-register` Do not run registration
+- `--no-detection` Do not run cell candidate detection
+- `--no-classification` Do not run cell classification
+- `--no-analyse` Do not analyse and export cell positions
+- `--no-figures` Do not create figures (e.g. heatmap)
+
+#### Figure options
+
+Figures cannot be customised much, but the current options are here:
+
+- `--heatmap-smoothing` Gaussian smoothing sigma, in microns.
+- `--no-mask-figs` Don't mask the figures (removing any areas outside the brain, from e.g. smoothing).
+
+#### Performance, debugging and testing
+
+- `--debug` Increase verbosity of statements printed to console and save all intermediate files.
+- `--n-free-cpus` The number of CPU cores on the machine to leave unused by the program to spare resources.
+- `--max-ram` Maximum amount of RAM to use (in GB) — **not currently fully implemented for all parts of `brainmapper`**
+
+Useful for testing, or if you know your cells are only in a specific region:
+
+- `--start-plane` The first plane to process in the Z dimension
+- `--end-plane` The last plane to process in the Z dimension
+
+#### Standard space options
+
+- `--transform-all` Transform all cell positions (including artifacts).
+
+## Additional options
+
+```{toctree}
+:maxdepth: 1
+candidate-detection
+classification
+/documentation/brainreg/user-guide/parameters
+```
+
+:::{hint}
+If you're using `brainmapper` at the [Sainsbury Wellcome Centre](https://www.sainsburywellcome.org/web/), you may wish to see the [instructions for using brainmapper on the SWC HPC system](using-brainmapper-at-the-swc).
+:::
+
+```{toctree}
+:maxdepth: 1
+:hidden:
+using-brainmapper-at-the-swc
+```
diff --git a/docs/source/documentation/brainglobe-workflows/brainmapper/data-requirements.md b/docs/source/documentation/brainglobe-workflows/brainmapper/data-requirements.md
new file mode 100644
index 00000000..bbbb7ddc
--- /dev/null
+++ b/docs/source/documentation/brainglobe-workflows/brainmapper/data-requirements.md
@@ -0,0 +1,36 @@
+# Data requirements
+
+What kind of data does `brainmapper` support?
+
+## Introduction
+
+`brainmapper` was written to analyse certain kinds of whole-brain microscopy datasets (e.g. serial two-photon or lightsheet imaging in cleared tissue).
+Although we are working on supporting different kinds of data, currently, the data must fit these criteria:
+
+### Image channels
+
+For registration, you only need a single channel, but this is ideally a "background" channel, i.e., one with only autofluorescence, and no other strong signal. Typically, we acquire the "signal" channels with red or green filters, and then the "background" channel with blue filters.
+
+For cell detection, you will need two channels, the "signal" channel, and the "background" channel.
+The signal channel should contain brightly labelled cells (e.g. from staining or viral injections).
+The models supplied with `brainmapper` were trained on whole-cell labels, so if you have e.g. a nuclear marker, they will need to be retrained (see [Training the network](training/index)).
+However, realistically, the network will need to be retrained for every new application.
+
+### Image structure
+
+Although we hope to support more varied types of data soon, your images must currently:
+
+- Cover the entire brain
+- Be of sufficiently high resolution that cells appear in multiple planes (e.g. 10μm axial spacing)
+- Contain planes that are registered to each other (note that this is often not the case with slide scanners or manual acquisition)
+
+### Organisation
+
+`brainmapper` expects that your data will be stored as a series of 2D tiff files.
+These can be in a single directory, or you can generate a text file that points to them.
+Different channels in your dataset must be in different directories or text files.
+
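As a sketch of the text-file option, you can build a plane list from a channel directory like this (the `example_brain` layout and file names are hypothetical):

```shell
# Illustrative layout: one directory per channel, one 2D tiff per plane.
mkdir -p example_brain/ch00_signal
touch example_brain/ch00_signal/plane_0000.tif \
      example_brain/ch00_signal/plane_0001.tif \
      example_brain/ch00_signal/plane_0002.tif

# A sorted text listing of the planes can be passed to -s/-b instead of the directory.
ls example_brain/ch00_signal/*.tif > signal_planes.txt
```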
+:::{caution}
+Please ensure that none of the files or folders that you pass to `brainmapper` have a space in them.
+This should be fixed in the future, but for now, please use `/the_path/to/my_data` rather than `/the path/to/my data`
+:::
diff --git a/docs/source/documentation/cellfinder/images/load_data.gif b/docs/source/documentation/brainglobe-workflows/brainmapper/images/load_data.gif
similarity index 100%
rename from docs/source/documentation/cellfinder/images/load_data.gif
rename to docs/source/documentation/brainglobe-workflows/brainmapper/images/load_data.gif
diff --git a/docs/source/documentation/cellfinder/images/load_results.gif b/docs/source/documentation/brainglobe-workflows/brainmapper/images/load_results.gif
similarity index 100%
rename from docs/source/documentation/cellfinder/images/load_results.gif
rename to docs/source/documentation/brainglobe-workflows/brainmapper/images/load_results.gif
diff --git a/docs/source/documentation/brainglobe-workflows/brainmapper/index.md b/docs/source/documentation/brainglobe-workflows/brainmapper/index.md
new file mode 100644
index 00000000..89f91f3d
--- /dev/null
+++ b/docs/source/documentation/brainglobe-workflows/brainmapper/index.md
@@ -0,0 +1,37 @@
+# `brainmapper` command line tool
+
+`brainmapper` can:
+
+- Detect labelled cells in 3D in whole-brain images (many hundreds of GB),
+- Register the image to an atlas (such as the [Allen Mouse Brain Atlas](https://atlas.brain-map.org/atlas?atlas=602630314)),
+- Segment the brain based on the reference atlas,
+- Calculate the volume of each brain area, and the number of labelled cells within it,
+- Transform everything into standard space for analysis and visualisation.
+
+## User Guide
+
+```{toctree}
+:maxdepth: 1
+data-requirements
+cli
+visualisation
+output-files
+training/index
+```
+
+## Tutorials
+
+```{toctree}
+:maxdepth: 1
+/tutorials/brainmapper/index
+```
+
+## Troubleshooting
+
+Since `brainmapper` uses `cellfinder`, you may encounter issues when using the command-line tool that are [documented on the `cellfinder` page](../../cellfinder/troubleshooting/index.md).
+[Head there](../../cellfinder/troubleshooting/index.md) for more information on some common issues and debugging tips.
+
+## Notes
+
+- As of version `1.0.0` of `brainglobe-workflows`, the Docker image for `brainmapper` has been discontinued.
+- Prior to the release of `cellfinder` `v1.0.0`, this workflow and command-line tool were called "cellfinder".
diff --git a/docs/source/documentation/brainglobe-workflows/brainmapper/output-files.md b/docs/source/documentation/brainglobe-workflows/brainmapper/output-files.md
new file mode 100644
index 00000000..43d37d32
--- /dev/null
+++ b/docs/source/documentation/brainglobe-workflows/brainmapper/output-files.md
@@ -0,0 +1,32 @@
+# Output files
+
+When you run `brainmapper`, depending on the options chosen, a number of files will be created.
+These may be useful for custom analyses (i.e. analysis not currently performed by `brainmapper` itself).
+The file descriptions are ordered by the subdirectory that they are found in, within the main `brainmapper` output directory.
+
+## Analysis
+
+- `summary.csv` - This file lists, for each brain area, the number of cells detected, the volume of the brain area, and the density of cells (in cells per cubic millimetre).
+- `all_points.csv` - This file lists every detected cell, and its coordinates in both the raw data and atlas spaces, along with the brain structure it is found in.
+
+## Figures
+
+- `heatmap.tiff` - This is a heatmap (in the coordinate space of the downsampled, reoriented data) representing cell densities across the brain.
+
+## Points
+
+- `cells.xml` - Cell candidate positions in the coordinate space of the raw data
+- `cell_classification.xml` - Same as `cells.xml`, but after classification (i.e., each cell candidate has a cell/no_cell label)
+- `downsampled.points` - Detected cell coordinates, in the coordinate space of the raw data, but downsampled and reoriented to match the atlas (not yet warped to the atlas). This can be loaded with `pandas.read_hdf()`.
+- `atlas.points` - As above, but warped to the atlas. This can be loaded with `pandas.read_hdf()`.
+- `points.npy` - Cell coordinates, transformed into atlas space, for visualisation using [brainrender](https://github.com/brainglobe/brainrender)
+- `abc4d.npy` - Exported file for use with [abc4d](https://github.com/valeriabonapersona/abc4d)
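The `.npy` outputs can be pulled straight into your own analysis with NumPy. The snippet below is only a sketch: it fabricates a stand-in file, and the assumed layout (one row per detected cell, with x, y, z columns) should be checked against your own output:

```python
import numpy as np

# Fabricate a stand-in for brainmapper's points.npy (illustrative only).
rng = np.random.default_rng(0)
np.save("points.npy", rng.uniform(0, 500, size=(100, 3)))

# In real use, load the file from the Points subdirectory of your output directory.
points = np.load("points.npy")
print(points.shape)  # (100, 3)
```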
+
+## Registration
+
+The registration directory is simply a `brainreg` output directory.
+To understand these files, please see the [brainreg output files](/documentation/brainreg/user-guide/output-files/) page.
+
+Two other files are also saved, `cellfinder.json` and `cellfinder_DATE_TIME.log`.
+These files contain information about how cellfinder was run, and are useful for troubleshooting and debugging.
+If you ask for help (e.g. on the [image.sc forum](https://forum.image.sc/tag/brainglobe)), we may ask you to send the log file.
diff --git a/docs/source/documentation/cellfinder/user-guide/command-line/training/index.md b/docs/source/documentation/brainglobe-workflows/brainmapper/training/index.md
similarity index 100%
rename from docs/source/documentation/cellfinder/user-guide/command-line/training/index.md
rename to docs/source/documentation/brainglobe-workflows/brainmapper/training/index.md
diff --git a/docs/source/documentation/cellfinder/user-guide/command-line/training/using-supplied-data.md b/docs/source/documentation/brainglobe-workflows/brainmapper/training/using-supplied-data.md
similarity index 100%
rename from docs/source/documentation/cellfinder/user-guide/command-line/training/using-supplied-data.md
rename to docs/source/documentation/brainglobe-workflows/brainmapper/training/using-supplied-data.md
diff --git a/docs/source/documentation/brainglobe-workflows/brainmapper/using-brainmapper-at-the-swc.md b/docs/source/documentation/brainglobe-workflows/brainmapper/using-brainmapper-at-the-swc.md
new file mode 100644
index 00000000..3ce8485c
--- /dev/null
+++ b/docs/source/documentation/brainglobe-workflows/brainmapper/using-brainmapper-at-the-swc.md
@@ -0,0 +1,64 @@
+# Using `brainmapper` at the SWC
+
+N.B. Before starting to use `brainmapper` on the SWC, you should familiarise yourself with the job scheduler system ([SLURM](https://slurm.schedmd.com/documentation.html)).
+
+:::{hint}
+This information refers to using `brainmapper` for cell detection and registration, but any of the BrainGlobe command-line tools (e.g. training `brainmapper`, or running only registration with `brainreg`) can be used similarly.
+:::
+
+## Interactive use
+
+On the SWC cluster, no software needs to be installed, as `brainmapper` can be loaded with `module load brainglobe`.
+
+`brainmapper` can be used interactively, by starting an interactive job:
+
+```bash
+srun -p gpu --gres=gpu:1 -n 20 -t 0-24:00 --pty --mem=120G bash -i
+```
+
+Loading `brainmapper`:
+
+```bash
+module load brainglobe
+```
+
+And then running `brainmapper` as per the [User guide](index).
+
+## Batch processing
+
+It is recommended to run `brainmapper` via the batch submission system.
+This has many advantages:
+
+- Your analysis is reproducible: you have a script showing exactly what you did.
+- You don't need to wait for computing resources to become available: once submitted, the job will wait until it can be run.
+- If for any reason the analysis is interrupted, you can easily restart.
+- You don't need to keep a connection to the cluster open.
+- You can easily receive email updates when the job starts and finishes.
+
+An example batch script is given below, but it is recommended to familiarise yourself with the batch submission system before trying to optimise `brainmapper`.
+
+```bash
+#!/bin/bash
+
+#SBATCH -p gpu # partition (queue)
+#SBATCH -N 1 # number of nodes
+#SBATCH --mem 120G # memory pool for all cores
+#SBATCH --gres=gpu:1
+#SBATCH -n 10
+#SBATCH -t 1-0:0 # time (D-HH:MM)
+#SBATCH -o brainmapper.out
+#SBATCH -e brainmapper.err
+#SBATCH --mail-type=ALL
+#SBATCH --mail-user=youremail@domain.com
+
+cell_file='/path/to/signal/channel'
+background_file='/path/to/background/channel'
+output_dir='/path/to/output/directory'
+
+echo "Loading brainglobe environment"
+module load brainglobe
+
+echo "Running brainmapper"
+# Just an example. See the user guide for the specific parameters
+brainmapper -s $cell_file -b $background_file -o $output_dir -v 5 2 2 --orientation psl
+```
diff --git a/docs/source/documentation/brainglobe-workflows/brainmapper/visualisation.md b/docs/source/documentation/brainglobe-workflows/brainmapper/visualisation.md
new file mode 100644
index 00000000..d6eec473
--- /dev/null
+++ b/docs/source/documentation/brainglobe-workflows/brainmapper/visualisation.md
@@ -0,0 +1,24 @@
+# Visualisation
+
+The `cellfinder` package that the `brainmapper` workflow uses comes with a plugin for `napari` to allow you to easily view results.
+
+## Getting started
+
+To view your data, first open napari. The easiest way to do this is to open a terminal (making sure your `brainmapper` conda environment is activated) and type `napari`.
+A napari window should open.
+
+## Visualising your raw data
+
+Assuming that your raw data is stored as `.tiff` files, drag these into napari (onto the main window in the middle).
+This should be whatever you passed to `brainmapper` originally, i.e., a single multipage tiff, or a directory of 2D tiffs.
+You can load as many channels as you like (e.g., the signal and the background channel).
+
+![Loading raw data into napari](/documentation/brainglobe-workflows/brainmapper/images/load_data.gif)
+
+## Visualising your results
+
+You can then drag and drop the `brainmapper` output directory (the one you specified with the `-o` flag) into the napari window.
+The plugin will then load your detected cells (in yellow) and the rejected cell candidates (in blue).
+If you carried out registration, then these results will be overlaid (similarly to the [brainreg plugin](/documentation/brainreg/user-guide/visualisation), but transformed to the coordinate space of your raw data).
+
+![Visualising `brainmapper` results](/documentation/brainglobe-workflows/brainmapper/images/load_results.gif)
diff --git a/docs/source/documentation/brainglobe-workflows/index.md b/docs/source/documentation/brainglobe-workflows/index.md
index 80813d46..dedf0387 100644
--- a/docs/source/documentation/brainglobe-workflows/index.md
+++ b/docs/source/documentation/brainglobe-workflows/index.md
@@ -3,7 +3,14 @@
brainglobe-workflows is a collection of common data analysis pipelines that utilise a combination of BrainGlobe tools.
It currently provides:
-- `cellfinder`: Whole-brain cell detection and classification. [Read more about the command line interface here](/documentation/cellfinder/user-guide/command-line/index.md).
+- `brainmapper`: Whole-brain cell detection and classification. [Read more about the command line interface here](brainmapper/index.md). This workflow was [previously called `cellfinder`](/blog/version1/core_and_napari_merge.md).
+
+You can find more information on each of these tools by visiting the links below:
+
+```{toctree}
+:maxdepth: 1
+brainmapper/index
+```
## Installation
@@ -15,7 +22,17 @@ pip install brainglobe-workflows
Doing so will make all of the command-line tools that `brainglobe-workflows` provides visible whilst working inside your environment.
-## Old `cellfinder` installations
+## Installing with `cellfinder` versions older than `v1.0.0`
+
+The `cellfinder` package, command-line tool, and workflow have undergone significant changes in the move to version 1.
+You can find a case-by-case breakdown of what you will need to do if you want to upgrade/install `brainglobe-workflows` whilst retaining the functionality of old `cellfinder` installs (and the command-line tool) [in the corresponding changelog](/community/releases/v1/cellfinder-migration.md).
+
+However, the simplest option is to create a fresh Python environment and
+
+```bash
+pip install brainglobe-workflows
+```
-If you have a version of the `cellfinder` package that is older than `v1.0.0`, we recommend that you uninstall your version of `cellfinder` and replace the command-line tool with the version provided by `brainglobe-workflows`.
-See the [blog post](/blog/version1/cellfinder_migration_live.md) for more information.
+into it.
+This will fetch the latest version of `brainglobe-workflows`, providing you with the `brainmapper` command-line tool / workflow which is functionally equivalent to the old "cellfinder" command-line tool.
+It will also provide you with the updated `cellfinder` package (at least `v1.0.0`) whose API and package structure matches that described in the documentation.
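+
+Assuming you use conda (as recommended elsewhere in this documentation), the fresh-environment route might look like the following; the environment name and Python version are just examples:
+
+```bash
+# Create and activate a clean environment
+conda create -n brainglobe-env python=3.10 -y
+conda activate brainglobe-env
+
+# Install the workflows package, which pulls in cellfinder >= 1.0.0
+pip install brainglobe-workflows
+```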
diff --git a/docs/source/documentation/brainrender/installation.md b/docs/source/documentation/brainrender/installation.md
index 0cac2e1a..ae7205c7 100644
--- a/docs/source/documentation/brainrender/installation.md
+++ b/docs/source/documentation/brainrender/installation.md
@@ -26,9 +26,6 @@ others. You can safely ignore these.
Note that some of `brainrender`'s more rarely used features may require the installation of additional packages. In particular, for exporting videos you will need to install [`ffmpeg`](https://ffmpeg.org/download.html).
:::
-Once you have installed brainrender, dive right in with this
-[getting started example notebook!](https://github.com/brainglobe/brainrender/blob/master/getting_started.ipynb)
-
## Advanced installation
If you want the most recent version of `brainrender`'s code, perhaps to help developing it, you can either
diff --git a/docs/source/documentation/cellfinder/index.md b/docs/source/documentation/cellfinder/index.md
index d1013f78..01e3e910 100644
--- a/docs/source/documentation/cellfinder/index.md
+++ b/docs/source/documentation/cellfinder/index.md
@@ -1,77 +1,68 @@
# cellfinder
-cellfinder is software for automated 3D cell detection in very large 3D images (e.g., serial two-photon or lightsheet
-volumes of whole mouse brains).
+cellfinder is software for automated 3D cell detection in very large 3D images (e.g., serial two-photon or lightsheet volumes of whole mouse brains).
![Detected labelled cells, overlaid on a segmented coronal brain section](images/cells.png)
**Detected labelled cells, overlaid on a segmented coronal brain section**
## Ways to use cellfinder
-cellfinder exists as three separate software packages with different user interfaces and different aims.
-:::{hint}
-If you don't know how to start, we recommend the cellfinder napari plugin.
-:::
+cellfinder can be used in three ways, each with different user interfaces and different aims.
+### cellfinder.core
-### cellfinder-core
-`cellfinder-core` is a Python package implementing the core algorithm for efficient cell detection in large images.
+`cellfinder.core` is a Python submodule implementing the core algorithm for efficient cell detection in large images.
-The package exists on its own to allow developers to implement the algorithm in their own software. For now, the only
-API documentation is in the [GitHub README](https://github.com/brainglobe/cellfinder-core/blob/main/README.md), please
-see the documentation for the napari plugin [here](user-guide/napari-plugin/index) for an explanation of the parameters.
+The submodule exists to allow developers to implement the algorithm in their own software.
+For now, the only API documentation is in the [GitHub README](https://github.com/brainglobe/cellfinder/blob/main/README.md); for an explanation of the parameters, please see the documentation for the napari plugin [here](user-guide/napari-plugin/index).
Alternatively, please [get in touch](/contact).
### cellfinder napari plugin
-This is a thin wrapper around the `cellfinder-core` package and aims to:
-
-* Provide the cell detection algorithm in a user-friendly form
-* Allow the cell detection algorithm to be chained together with other tools in the Napari ecosystem
-* Allow easier parameter optimisation for users of the other cellfinder tools.
+This is a thin wrapper around the `cellfinder.core` submodule and aims to:
+- Provide the cell detection algorithm in a user-friendly form
+- Allow the cell detection algorithm to be chained together with other tools in the napari ecosystem
+- Allow easier parameter optimisation for users of the other cellfinder tools.
![Visualising detected cells in the cellfinder napari plugin](images/napari-cellfinder.gif)
**Visualising detected cells in the cellfinder napari plugin**
-### cellfinder command-line tool
-A command-line tool (confusingly just called `cellfinder`) exists to combine the `cellfinder-core` cell detection
-algorithm and [brainreg](/documentation/brainreg/index). `cellfinder` can:
-
-* Detect labelled cells in 3D in whole-brain images (many hundreds of GB)
-* Register the image to an atlas (such as the [Allen Mouse Brain Atlas](https://atlas.brain-map.org/atlas?atlas=602630314))
-* Segment the brain based on the reference atlas
-* Calculate the volume of each brain area, and the number of labelled cells within it
-* Transform everything into standard space for analysis and visualisation
+### `brainmapper` command-line tool
+The `brainmapper` command-line tool exists to combine the `cellfinder.core` cell detection algorithm and [brainreg](/documentation/brainreg/index).
+See the [documentation for `brainglobe-workflows`](/documentation/brainglobe-workflows/index) for more information.
## Installation
+
```{toctree}
:maxdepth: 1
installation
```
## User guide
+
```{toctree}
:maxdepth: 1
-user-guide/command-line/index
user-guide/napari-plugin/index
user-guide/cellfinder-core
user-guide/training-strategy
-troubleshooting/index
```
-## Tutorials
+## Troubleshooting
+
```{toctree}
:maxdepth: 1
-/tutorials/cellfinder-cli/index
+troubleshooting/index
+troubleshooting/error-messages
+troubleshooting/speed-up
```
-
## Citing cellfinder
-If you find cellfinder useful, and use it in your research, please cite the paper outlining the cell detection algorithm:
+If you find `cellfinder` useful, and use it in your research, please cite the paper outlining the cell detection algorithm:
> Tyson, A. L., Rousseau, C. V., Niedworok, C. J., Keshavarzi, S., Tsitoura, C., Cossell, L., Strom, M. and Margrie, T. W. (2021) “A deep learning algorithm for 3D cell detection in whole mouse brain image datasets” PLOS Computational Biology, 17(5), e1009074
[https://doi.org/10.1371/journal.pcbi.1009074](https://doi.org/10.1371/journal.pcbi.1009074)
>
+
If you use any of the image registration functions in cellfinder, please also cite [brainreg](/documentation/brainreg/index).
diff --git a/docs/source/documentation/cellfinder/installation.md b/docs/source/documentation/cellfinder/installation.md
index f41c01ad..94a735df 100644
--- a/docs/source/documentation/cellfinder/installation.md
+++ b/docs/source/documentation/cellfinder/installation.md
@@ -1,83 +1,34 @@
# Requirements
-cellfinder should run on most machines, but for routine use on large datasets, you will need a fairly high-powered
-computer (see the guide to [Speeding up cellfinder](/documentation/cellfinder/troubleshooting/speed-up) for details).
+`cellfinder` should run on most machines, but for routine use on large datasets, you will need a fairly high-powered computer (see the guide to [Speeding up cellfinder](/documentation/cellfinder/troubleshooting/speed-up) for details).
-Using an NVIDIA GPU will speed up cell classification considerably. See [setting up your GPU](/documentation/setting-up/gpu)
-for details.
+Using an NVIDIA GPU will speed up cell classification considerably.
+See [setting up your GPU](/documentation/setting-up/gpu) for details.
-cellfinder uses brainreg for atlas registration, and the hardware requirements for brainreg depend on the atlas
-(and in particular, the resolution) you want to use.
-Most machines (including laptops) will be able to use most of the atlases, but some atlases
-(such as the 10μm mouse atlases) may need up to 50GB of RAM.
+`cellfinder` uses `brainreg` for atlas registration, and the hardware requirements for `brainreg` depend on the atlas (and in particular, the resolution) you want to use.
+Most machines (including laptops) will be able to use most of the atlases, but some atlases (such as the 10μm mouse atlases) may need up to 50GB of RAM.
-# Installation
-:::{admonition} Installing the napari plugin
-:class: dropdown
-* [Make sure you have napari installed](https://napari.org/stable/tutorials/fundamentals/installation.html)
-* Install the cellfinder-napari plugin from within napari
-* (`Plugins` -> `Install/Uninstall Package(s)`, choosing `cellfinder-napari`).)
-:::
+## Installation
-:::{admonition} Installing the cellfinder-core Python package
-:class: dropdown
+To use cellfinder, you will need to have Python on your machine.
+Your machine may already have Python installed, but we recommend installing miniconda.
+See [Using conda](/documentation/setting-up/conda) for details.
+### Installing the `cellfinder` package
-## Installing Python
-Your machine may already have Python
-installed, but we recommend installing miniconda. See [Using conda](/documentation/setting-up/conda) for details.
-
-## Installing cellfinder
-```{hint}
-Remember to activate your conda environment before doing anything
-```
+You can use `pip` from inside your activated Python environment to install the `cellfinder` package directly.
+Make sure to install version `1.0.0` or later!
```bash
-pip install cellfinder-core
-```
-:::
-
-:::{admonition} Installing the command line tool
-:class: dropdown
-
-```{hint}
-If you know what you're doing (and [your GPU is set up](/documentation/setting-up/gpu)), just run `pip install cellfinder`
-```
-
-
-## Installing Python
-
-cellfinder is written in Python, and so needs a functional Python installation. Your machine may already have Python
-installed, but we recommend installing miniconda. See [Using conda](/documentation/setting-up/conda) for details.
-
-```{caution}
-cellfinder should run on any type of Python installation, but if you don't use conda,
-we may be limited in the support we can offer.
+# Quote the requirement so your shell doesn't treat ">" or "[" specially
+pip install "cellfinder>=1.0.0"          # Run this if you only want the Python API (for use in scripts)
+pip install "cellfinder[napari]>=1.0.0"  # Run this if you want to install the API and the napari plugin
```
+### Installing BrainGlobe Atlases
-## Installing cellfinder
-```{hint}
-Remember to activate your conda environment before doing anything
-```
-
-### Using pip
-```bash
-pip install cellfinder[napari]
-```
-
-To only install the command line tool with no GUI (e.g., to run cellfinder on an HPC cluster), just run:
-
-```
-pip install cellfinder
-```
-:::
-
-:::{admonition} Installing the command line tool using docker
-:class: dropdown
+To download BrainGlobe atlases in advance, please see the guide to [the BrainGlobe Atlas API command-line interface](/documentation/bg-atlasapi/usage/command-line-interface).
-Please see the [guide to using cellfinder with docker](user-guide/command-line/docker.md)
-:::
+### Installing `brainmapper`
-To install download BrainGlobe atlases in advance, please see the guide to
-[the BrainGlobe Atlas API command-line interface](/documentation/bg-atlasapi/usage/command-line-interface) for details.
\ No newline at end of file
+The `brainmapper` workflow combines the `cellfinder` and `brainreg` packages into a single workflow, providing full-brain registration and cell detection in one command.
+You can read more about it [on this page](../brainglobe-workflows/brainmapper/index.md).
diff --git a/docs/source/documentation/cellfinder/troubleshooting/error-messages.md b/docs/source/documentation/cellfinder/troubleshooting/error-messages.md
index 58ed8237..161bba3e 100644
--- a/docs/source/documentation/cellfinder/troubleshooting/error-messages.md
+++ b/docs/source/documentation/cellfinder/troubleshooting/error-messages.md
@@ -1,26 +1,27 @@
# Debugging common error messages
## Error messages
+
### OSError: [Errno 24] Too many open files
```bash
OSError: [Errno 24] Too many open files
```
-This is likely because your default limits are set too low. To fix this, follow the instructions
-[here](https://easyengine.io/tutorials/linux/increase-open-files-limit/). If for any reason you don't want to or
-can't change the system-wide limits, running `ulimit -n 60000` before running cellfinder should work. This setting
-will persist for the present shell session, but will have to be repeated if you open a new terminal.
+This is likely because your default limits are set too low.
+To fix this, follow the instructions [here](https://easyengine.io/tutorials/linux/increase-open-files-limit/).
+If for any reason you don't want to or can't change the system-wide limits, running `ulimit -n 60000` before running `cellfinder` should work.
+This setting will persist for the present shell session, but will have to be repeated if you open a new terminal.
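+
+For example, on Linux or macOS you can inspect and raise the per-shell limit like this (the exact ceiling you can set depends on your system's hard limit):
+
+```bash
+# Show the current soft limit on open files
+ulimit -n
+
+# Raise it for this shell session only
+ulimit -n 60000
+```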
-### error: unrecognized arguments:
+### error: unrecognized arguments
```bash
main.py: error: unrecognized arguments: data/dataset1
```
-If what comes after `urecognised arguements` looks to be the part of the filepath you entered, after a space,
-then you should enclose the full path in quotation marks. (**i.e. `"/path/to/my data"` not `path/to/my data`**) .
-Otherwise cellfinder will interpret the path as two inputs, separated by a space.\)
+If what comes after `unrecognized arguments` looks like part of the filepath you entered after a space, then you should enclose the full path in quotation marks.
+For example, use `"/path/to/my data"` not `/path/to/my data`.
+Otherwise cellfinder will interpret the path as two inputs, separated by a space.
### CommandLineInputError: File path: cannot be found.
diff --git a/docs/source/documentation/cellfinder/troubleshooting/index.md b/docs/source/documentation/cellfinder/troubleshooting/index.md
index 78539a79..3f1aebdc 100644
--- a/docs/source/documentation/cellfinder/troubleshooting/index.md
+++ b/docs/source/documentation/cellfinder/troubleshooting/index.md
@@ -1,34 +1,39 @@
# Troubleshooting
## Improving algorithm performance
-cellfinder detects cells in a two-stage process, firstly cell candidates are detected. These are cell-like
-objects of approximately the correct intensity and size. These cell candidates are then classified into
-cells and artefacts by a deep learning step.
+
+`brainmapper` detects cells in a two-stage process: first, cell candidates are detected.
+These are cell-like objects of approximately the correct intensity and size.
+These cell candidates are then classified into cells and artefacts by a deep learning step.
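+
+As a toy illustration of this two-stage idea (this is *not* the real cellfinder algorithm, just a sketch of the logic), stage one keeps anything of roughly the right brightness and size, and stage two makes the final cell/artefact call:
+
+```python
+# Hypothetical candidate objects, described by brightness and size only
+candidates = [
+    {"intensity": 220, "size_um": 12},  # plausible cell
+    {"intensity": 35, "size_um": 11},   # too dim: never becomes a candidate
+    {"intensity": 240, "size_um": 85},  # too large: never becomes a candidate
+]
+
+def detect(objects, min_intensity=100, max_size_um=30):
+    """Stage 1: keep cell-like objects of approximately the right intensity and size."""
+    return [o for o in objects if o["intensity"] >= min_intensity and o["size_um"] <= max_size_um]
+
+def classify(objects):
+    """Stage 2 stand-in: in cellfinder this is a deep-learning classifier."""
+    return [o for o in objects if o["intensity"] > 150]
+
+cells = classify(detect(candidates))
+print(len(cells))  # 1 candidate survives both stages
+```
+
+Loosening the stage-one thresholds admits more candidates (more false positives) for stage two to reject, which is why permissive detection parameters are usually safe.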
### Cell candidate detection
-If the initial cell candidate detection is not performing well,
-then we suggest adjusting the
-[cell detection parameters](/documentation/cellfinder/user-guide/napari-plugin/all-cell-detection-parameters). For a
-better understanding of what these parameters do, it may be useful to consult the
-[original PLOS Computational Biology paper](https://doi.org/10.1371/journal.pcbi.1009074).
+
+If the initial cell candidate detection is not performing well, then we suggest adjusting the [cell detection parameters](/documentation/cellfinder/user-guide/napari-plugin/all-cell-detection-parameters).
+For a better understanding of what these parameters do, it may be useful to consult the [original PLOS Computational Biology paper](https://doi.org/10.1371/journal.pcbi.1009074).
:::{hint}
-In general, false positives (non-cells being detected) is generally ok, as these will be refined in the
-classification step.
+In general, false positives (non-cells being detected) are OK, as these will be refined in the classification step.
:::
### Cell candidate classification
-The classification will use a pre-trained network by default that is included with the software. This network will
-usually need to be retrained. For more details, please see the guide to
-[retraining the pre-trained network](/documentation/cellfinder/user-guide/training-strategy).
-## Registration
+The classification will use a pre-trained network by default that is included with the software.
+This network will usually need to be retrained.
+For more details, please see the guide to [retraining the pre-trained network](/documentation/cellfinder/user-guide/training-strategy).
+
+## Registration
+
Please see the [brainreg troubleshooting guide](/documentation/brainreg/troubleshooting).
+
## Fixing technical problems
-As cellfinder relies on a number of third party libraries (notably [TensorFlow](https://www.tensorflow.org/),
-[CUDA](https://developer.nvidia.com/cuda-zone) and [cuDNN](https://developer.nvidia.com/cudnn))
-there may be issues while running the software.
+As `brainmapper` relies on a number of third-party libraries, notably
+
+- [TensorFlow](https://www.tensorflow.org/)
+- [CUDA](https://developer.nvidia.com/cuda-zone)
+- [cuDNN](https://developer.nvidia.com/cudnn)
+
+there may be issues while running the software.
If you are having any issues, please see the following sections:
```{toctree}
@@ -40,4 +45,3 @@ error-messages
## Anything else
If you are still having trouble, please [get in touch](/contact).
-
diff --git a/docs/source/documentation/cellfinder/troubleshooting/speed-up.md b/docs/source/documentation/cellfinder/troubleshooting/speed-up.md
index 7f6fdbca..6a54bfcf 100644
--- a/docs/source/documentation/cellfinder/troubleshooting/speed-up.md
+++ b/docs/source/documentation/cellfinder/troubleshooting/speed-up.md
@@ -2,64 +2,59 @@
## Introduction
-Before trying to troubleshoot, **cellfinder can be slow**. Even on a very good desktop computer, a full analysis of a
-labelled mouse brain with many thousands of cells can take between 4-12 hours.
+Before trying to troubleshoot, note that **cellfinder can be slow**.
+Even on a very good desktop computer, a full analysis of a labelled mouse brain with many thousands of cells can take between 4 and 12 hours.
## Things to Try
### Use a better computer
-Annoying advice, but a bigger, better computer will likely speed up cellfinder. In particular, we recommend:
+Annoying advice, but a bigger, better computer will likely speed up cellfinder.
+In particular, we recommend:
+
* Multicore CPU (the more cores and the faster the better)
* A recent NVIDIA GPU (the more VRAM and CUDA cores the better).
* Plenty of RAM. If you want to register your images to an atlas, this can use up to 50GB of RAM (depending on the atlas)
* Fast local storage for your data (ideally SSD)
-
### Put your data on a fast hard drive
-If your data is on a "normal" spinning hard drive, and you have a solid-state drive (SSD) available, putting
-your data on there will likely speed things up.
+If your data is on a "normal" spinning hard drive, and you have a solid-state drive (SSD) available, putting your data on there will likely speed things up.
-If your data is on a network drive (microscopy facility server, institutional file storage, etc.),
-consider moving it to your local machine first.
+If your data is on a network drive (microscopy facility server, institutional file storage, etc.), consider moving it to your local machine first.
-If you're using a compute cluster, there is likely to be a specific fast data storage area for this,
-maybe called `scratch`. Ask your sysadmins for help
+If you're using a compute cluster, there is likely to be a specific fast data storage area for this, maybe called `scratch`. Ask your sysadmins for help.
## Specific Issues
### Cell classification or training the network is slow
:::{hint}
-If you think that cellfinder is using the GPU properly, you can often increase the batch size used for training or
-inference. This will depend on your specific GPU, but for inference, batch sizes of up to 128
-often work well on modern GPUs with >8GB memory.
+If you think that cellfinder is using the GPU properly, you can often increase the batch size used for training or inference.
+This will depend on your specific GPU, but for inference, batch sizes of up to 128 often work well on modern GPUs with >8GB memory.
:::
-These steps may be slow if cellfinder is not properly using the GPU. If you have followed the instructions in
-[setting up your GPU](/documentation/setting-up/gpu), you may need to check that everything is configured properly:
-
-Open a terminal (or Anaconda Prompt):
+These steps may be slow if cellfinder is not properly using the GPU.
+If you have followed the instructions in [setting up your GPU](/documentation/setting-up/gpu), you may need to check that everything is configured properly.
:::{note}
As always, make sure your conda environment is activated
:::
-Start Python
+Open a terminal (or Anaconda Prompt), start Python,
```bash
python
```
-Check that tensorflow can use the GPU
+and check that tensorflow can use the GPU,
```python
import tensorflow as tf
tf.test.is_gpu_available()
```
-If you see something like this, then all is well.
+If you see something like the output below, then all is well.
```bash
2019-06-26 10:51:34.697900: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX512F
@@ -88,7 +83,6 @@ If you see something like this:
False
```
-Then your GPU is not properly configured. If you have followed everything in
-[setting up your GPU](/documentation/setting-up/gpu), please go speak to whoever administers your machine. If you're
-still stuck [get in touch](/contact), but there is a limited amount we can do to help configure your system.
-
+Then your GPU is not properly configured.
+If you have followed everything in [setting up your GPU](/documentation/setting-up/gpu), please go speak to whoever administers your machine.
+If you're still stuck [get in touch](/contact), but there is a limited amount we can do to help configure your system.
diff --git a/docs/source/documentation/cellfinder/user-guide/cellfinder-core.md b/docs/source/documentation/cellfinder/user-guide/cellfinder-core.md
index 3aa922c7..907ac31f 100644
--- a/docs/source/documentation/cellfinder/user-guide/cellfinder-core.md
+++ b/docs/source/documentation/cellfinder/user-guide/cellfinder-core.md
@@ -1,11 +1,12 @@
-# cellfinder-core API
+# cellfinder.core API
-The API is not yet fully documented. For an idea of what the parameters do, see the
-[documentation for the napari plugin](napari-plugin/index).
+The API is not yet fully documented.
+For an idea of what the parameters do, see the [documentation for the napari plugin](napari-plugin/index).
## To run the full pipeline (cell candidate detection and classification)
+
```python
-from cellfinder_core.main import main as cellfinder_run
+from cellfinder.core.main import main as cellfinder_run
import tifffile
signal_array = tifffile.imread("/path/to/signal_image.tif")
@@ -15,8 +16,7 @@ voxel_sizes = [5, 2, 2] # in microns
detected_cells = cellfinder_run(signal_array,background_array,voxel_sizes)
```
-The output is a list of
-[imlib Cell objects](https://github.com/brainglobe/brainglobe-utils/blob/main/brainglobe_utils/cells/cells.py).
+The output is a list of [`Cell` objects](https://github.com/brainglobe/brainglobe-utils/blob/main/brainglobe_utils/cells/cells.py) (from `imlib`, now part of `brainglobe-utils`).
Each `Cell` has a centroid coordinate, and a type:
```python
@@ -24,8 +24,7 @@ print(detected_cells[0])
# Cell: x: 132, y: 308, z: 10, type: 2
```
-Cell type 2 is a "real" cell, and Cell type 1 is a "rejected" object (i.e.,
-not classified as a cell):
+Cell type 2 is a "real" cell, and Cell type 1 is a "rejected" object (i.e., not classified as a cell):
```python
from imlib.cells.cells import Cell
@@ -37,31 +36,31 @@ print(Cell.NO_CELL)
```
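+
+With those two type constants, separating real cells from rejected candidates is a simple filter. The snippet below uses a minimal stand-in `Cell` class purely for illustration; in practice you would use the imported `Cell` and the `detected_cells` list produced by the pipeline above:
+
+```python
+# Minimal stand-in for the real Cell class, for illustration only
+class Cell:
+    CELL = 2     # classified as a real cell
+    NO_CELL = 1  # rejected candidate
+
+    def __init__(self, x, y, z, cell_type):
+        self.x, self.y, self.z = x, y, z
+        self.type = cell_type
+
+detected_cells = [Cell(132, 308, 10, Cell.CELL), Cell(17, 40, 3, Cell.NO_CELL)]
+
+# Keep only the objects classified as real cells
+real_cells = [c for c in detected_cells if c.type == Cell.CELL]
+print(len(real_cells))  # 1
+```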
## Saving the results
-If you want to save the detected cells for use in other BrainGlobe software (e.g. the napari plugin)
-you can save in the cellfinder XML standard:
+
+If you want to save the detected cells for use in other BrainGlobe software (e.g. the napari plugin), you can save in the cellfinder XML standard:
+
```python
from imlib.IO.cells import save_cells
save_cells(detected_cells, "/path/to/cells.xml")
```
+
You can load these back with:
+
```python
from imlib.IO.cells import get_cells
cells = get_cells("/path/to/cells.xml")
```
-
## Using dask for lazy loading
-`cellfinder-core` supports most array-like objects. Using
-[Dask arrays](https://docs.dask.org/en/latest/array.html) allows for lazy
-loading of data, allowing large (e.g. TB) datasets to be processed.
-`cellfinder-core` comes with a function
-(based on [napari-ndtiffs](https://github.com/tlambert03/napari-ndtiffs)) to
-load a series of image files (e.g. a directory of 2D tiff files) as a Dask
-array. `cellfinder-core` can then be used in the same way as with a numpy array.
+
+`cellfinder.core` supports most array-like objects.
+Using [Dask arrays](https://docs.dask.org/en/latest/array.html) enables lazy loading of data, allowing large (e.g. TB) datasets to be processed.
+`cellfinder.core` comes with a function (based on [napari-ndtiffs](https://github.com/tlambert03/napari-ndtiffs)) to load a series of image files (e.g. a directory of 2D tiff files) as a Dask array.
+`cellfinder.core` can then be used in the same way as with a numpy array.
```python
-from cellfinder_core.main import main as cellfinder_run
-from cellfinder_core.tools.IO import read_with_dask
+from cellfinder.core.main import main as cellfinder_run
+from cellfinder.core.tools.IO import read_with_dask
signal_array = read_with_dask("/path/to/signal_image_directory")
background_array = read_with_dask("/path/to/background_image_directory")
@@ -70,14 +69,15 @@ voxel_sizes = [5, 2, 2] # in microns
detected_cells = cellfinder_run(signal_array,background_array,voxel_sizes)
```
-## Running the cell candidate detection and classification separately.
+## Running the cell candidate detection and classification separately
+
```python
import tifffile
from pathlib import Path
-from cellfinder_core.detect import detect
-from cellfinder_core.classify import classify
-from cellfinder_core.tools.prep import prep_classification
+from cellfinder.core.detect import detect
+from cellfinder.core.classify import classify
+from cellfinder.core.tools.prep import prep_classification
signal_array = tifffile.imread("/path/to/signal_image.tif")
background_array = tifffile.imread("/path/to/background_image.tif")
@@ -144,22 +144,19 @@ if len(cell_candidates) > 0: # Don't run if there's nothing to classify
network_depth,
)
```
+
## Training the network
-The training data needed are matched pairs (signal & background) of small
-(usually 50 x 50 x 100μm) images centered on the coordinate of candidate cells.
+
+The training data needed are matched pairs (signal & background) of small (usually 50 x 50 x 100μm) images centred on the coordinate of candidate cells.
These can be generated however you like, but we recommend the [napari plugin](napari-plugin/training-data-generation).
-`cellfinder-core` comes with a 50-layer ResNet trained on ~100,000 data points
-from serial two-photon microscopy images of mouse brains
-(available [here](https://gin.g-node.org/cellfinder/training_data)).
+`cellfinder.core` comes with a 50-layer ResNet trained on ~100,000 data points from serial two-photon microscopy images of mouse brains (available [here](https://gin.g-node.org/cellfinder/training_data)).
-Training the network is likely simpler using the
-[napari plugin](napari-plugin/training-the-network),
-but it is possible through the Python API.
+Training the network is likely simpler using the [napari plugin](napari-plugin/training-the-network), but it is possible through the Python API.
```python
from pathlib import Path
-from cellfinder_core.train.train_yml import run as run_training
+from cellfinder.core.train.train_yml import run as run_training
# list of training yml files
yaml_files = [Path("/path/to/training_yml.yml")]
diff --git a/docs/source/documentation/cellfinder/user-guide/command-line/cli.md b/docs/source/documentation/cellfinder/user-guide/command-line/cli.md
deleted file mode 100644
index f2f8bc5b..00000000
--- a/docs/source/documentation/cellfinder/user-guide/command-line/cli.md
+++ /dev/null
@@ -1,83 +0,0 @@
-# Command line interface
-## Basic usage
-
-To run cellfinder, use this general syntax
-
-```bash
- cellfinder -s signal_channel_images optional_signal_channel_images -b background_channel_images -o /path/to/output_directory -v 5 2 2 --orientation asl
-```
-
-:::{hint}
-All options can be found by running `cellfinder -h`
-:::
-
-## Arguments
-
-### Mandatory
-
-* `-s` or `--signal-planes-paths` Path to the directory of the signal files. Can also be a text file pointing to the
-files. **There can be as many signal channels as you like, and each will be treated independently**.
-* `-b` or `--background-planes-path` Path to the directory of the background files. Can also be a text file pointing to
-the files. **This background channel will be used for all signal channels**
-* `-o` or `--output-dir` Output directory for all intermediate and final results
-
-:::{caution}
-You must also specify the orientation and voxel size of your data, see
-[Image definition](/documentation/setting-up/image-definition).
-:::
-
-### The following options can also be used:
-
-**Only run parts of cellfinder**
-
-If for some reason you don't want some parts of cellfinder to run, you can use the following options.
-If a part of the pipeline is required by another part it will be run (i.e. `--no-detection` won't do
-anything unless `--no-classification` is also used). cellfinder will attempt to work out what parts of the
-pipeline have already been run (in a given output directory) and not run them again if appropriate.
-
-* `--no-register` Do not run registration
-* `--no-detection` Do not run cell candidate detection
-* `--no-classification` Do not run cell classification
-* `--no-analyse` Do not analyse and export cell positions
-* `--no-figures` Do not create figures (e.g. heatmap)
-
-**Figures options**
-
-Figures cannot yet be customised much, but the current options are here:
-
-* `--heatmap-smoothing` Gaussian smoothing sigma, in um.
-* `--no-mask-figs` Don't mask the figures (removing any areas outside the brain, from e.g. smoothing)
-
-**Performance, debugging and testing**
-
-* `--debug` Increase verbosity of statements printed to console and save all intermediate files.
-* `--n-free-cpus` The number of CPU cores on the machine to leave unused by the program to spare resources.
-* `--max-ram` Maximum amount of RAM to use (in GB) — **not currently fully implemented for all parts of cellfinder**
-
-Useful for testing or if you know your cells are only in a specific region
-
-* `--start-plane` The first plane to process in the Z dimension
-* `--end-plane` The last plane to process in the Z dimension
-
-**Standard space options**
-
-* `--transform-all` Transform all cell positions (including artifacts).
-
-## Additional options
-```{toctree}
-:maxdepth: 1
-candidate-detection
-classification
-/documentation/brainreg/user-guide/parameters
-```
-
-:::{hint}
-If you're using cellfinder at the [Sainsbury Wellcome Centre](https://www.sainsburywellcome.org/web/),
-you may wish to see the [instructions for using cellfinder on the SWC HPC system](using-cellfinder-at-the-swc).
-:::
-
-```{toctree}
-:maxdepth: 1
-:hidden:
-using-cellfinder-at-the-swc
-```
diff --git a/docs/source/documentation/cellfinder/user-guide/command-line/data-requirements.md b/docs/source/documentation/cellfinder/user-guide/command-line/data-requirements.md
deleted file mode 100644
index e3402953..00000000
--- a/docs/source/documentation/cellfinder/user-guide/command-line/data-requirements.md
+++ /dev/null
@@ -1,41 +0,0 @@
-# Data requirements
-What kind of data does cellfinder support?
-## Introduction
-
-cellfinder was written to analyse certain kinds of whole brain microscopy datasets (e.g. serial two-photon or
-lightsheet in cleared tissue). Although we are working on supporting different kinds of data, currently,
-the data must fit these criteria:
-
-### Image channels
-
-For registration, you only need a single channel, but this is ideally a "background" channel, i.e., one with only
-autofluorescence, and no other strong signal. Typically, we acquire the "signal" channels with red or green
-filters, and then the "background" channel with blue filters.
-
-For cell detection, you will need two channels, the "signal" channel, and the "background" channel.
-The signal channel should contain brightly labelled cells (e.g. from staining or viral injections). The models
-supplied with cellfinder were trained on whole-cell labels, so if you have e.g. a nuclear marker, they will need to
-be retrained (see [Training the network](training/index). However, realistically, the network will need to be
-retrained for every new application
-
-### Image structure
-
-Although we hope to support more varied types of data soon, your images must currently:
-
-* Cover the entire brain
-* Be of sufficiently high resolution that cells appear in multiple planes (i.e. 10μm axial spacing)
-* Contain planes that are registered to each other (i.e., this is often not the case with slide
-* scanners or manual acquisition)
-
-### Organisation
-
-cellfinder expects that your data will be stored as a series of 2D tiff files. These can be in a single directory,
-or you can generate a text file that points to them. Different channels in your dataset must be in different
-directories or text files.
-
-:::{caution}
-Please ensure that none of the files or folders that you pass to cellfinder have a space in them.
-This should be fixed in the future, but for now, please use `/the_path/to/my_data` rather than `/the path/to/my data`
-:::
-
-
diff --git a/docs/source/documentation/cellfinder/user-guide/command-line/docker.md b/docs/source/documentation/cellfinder/user-guide/command-line/docker.md
deleted file mode 100644
index d5e07f9b..00000000
--- a/docs/source/documentation/cellfinder/user-guide/command-line/docker.md
+++ /dev/null
@@ -1,30 +0,0 @@
-# Use with docker
-
-## Prerequisites
-
-* Linux machine ([most common distributions are supported](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html))
-* Recent NVIDIA GPU ([compute capability](https://en.wikipedia.org/wiki/CUDA) >=3)
-* [NVIDIA driver](https://www.nvidia.co.uk/Download/index.aspx?lang=en-uk) >= 418.81.07
-* [Docker version](https://docs.docker.com/engine/install/) >= 19.03
-
-## Setup
-
-**Install NVIDIA Container Toolkit**
-
-Full instructions are [here](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html).
-
-## Running cellfinder
-
-To run with GPU support, and mount the current working directory at `/data`**:**
-
-```bash
-docker container run --mount type=bind,source=${PWD},target=/data --gpus all -it ghcr.io/brainglobe/cellfinder
-```
-
-This will open up a bash prompt, and you can use cellfinder (or brainreg etc.) to analyse your data (mounted at `/data`) as normal, e.g.:
-
-```bash
-cellfinder -s /data/brain1/channel0 -b /data/brain1/channel1 -v 5 2 2 --orientation psl -o /data/analysis/brain1 --trained-model /data/models/retrained.h5
-```
-
-To leave the docker container when done, just `exit`.The data will be saved onto the host system, at your current working directory (you can mount different directories, or multiple directories, see the [docker documentation](https://docs.docker.com/storage/bind-mounts/)).
diff --git a/docs/source/documentation/cellfinder/user-guide/command-line/index.md b/docs/source/documentation/cellfinder/user-guide/command-line/index.md
deleted file mode 100644
index d8c55af6..00000000
--- a/docs/source/documentation/cellfinder/user-guide/command-line/index.md
+++ /dev/null
@@ -1,11 +0,0 @@
-# cellfinder command line tool
-
-```{toctree}
-:maxdepth: 1
-data-requirements
-cli
-visualisation
-output-files
-training/index
-docker
-```
diff --git a/docs/source/documentation/cellfinder/user-guide/command-line/output-files.md b/docs/source/documentation/cellfinder/user-guide/command-line/output-files.md
deleted file mode 100644
index d3972558..00000000
--- a/docs/source/documentation/cellfinder/user-guide/command-line/output-files.md
+++ /dev/null
@@ -1,40 +0,0 @@
-# Output files
-
-When you run cellfinder, depending on the options chosen, a number of files will be created. These may be useful for
-custom analyses (i.e. analysis not currently performed by cellfinder itself). The file descriptions are
-ordered by the subdirectory that they are found in, within the main cellfinder output directory.
-
-## Analysis
-
-* `summary.csv` - This file lists, for each brain area, the number of cells detected, the volume of the brain area,
-* and the density of cells (in cells per mm3).
-* `all_points.csv` - This file lists every detected cell, and it's coordinate in both the raw data and atlas spaces,
-* along with the brain structure it is found in.
-
-## Figures
-
-* `heatmap.tiff` - This is a heatmap (in the coordinate space of the downsampled, reoriented data) representing cell
-* densities across the brain.
-
-## Points
-
-* `cells.xml` - Cell candidate positions in the coordinate space of the raw data
-* `cell_classification.xml` - Same as `cells.xml`, but after classification (i.e., each cell candidate has a
-* cell/no_cell label)
-* `downsampled.points` - Detected cell coordinates, in the coordinate space of the raw data, but downsampled and
-* reoriented to match the atlas (but not yet warped to the atlas). This can be loaded with `pandas.read_hdf()`
-* `atlas.points` - As above, but warped to the atlas. This can be loaded with `pandas.read_hdf()`
-* `points.npy` - Cell coordinates, transformed into atlas space, for visualisation using
-* [brainrender](https://github.com/brainglobe/brainrender)
-* `abc4d.npy` - Exported file for use with [abc4d](https://github.com/valeriabonapersona/abc4d)
-
-## Registration
-
-The registration directory is simply a brainreg output directory. To understand these files, please see the
-[brainreg output files](/documentation/brainreg/user-guide/output-files/) page.
-
-
-Two other files are also saved, `cellfinder.json` and `cellfinder_DATE_TIME.log`. These files contain information
-about how cellfinder was run, and are useful for troubleshooting and debugging. If you ask for help (e.g.
-on the [image.sc. forum](https://forum.image.sc/tag/brainglobe)), we may ask you to send the log file.
-
diff --git a/docs/source/documentation/cellfinder/user-guide/command-line/using-cellfinder-at-the-swc.md b/docs/source/documentation/cellfinder/user-guide/command-line/using-cellfinder-at-the-swc.md
deleted file mode 100644
index 87d084cc..00000000
--- a/docs/source/documentation/cellfinder/user-guide/command-line/using-cellfinder-at-the-swc.md
+++ /dev/null
@@ -1,68 +0,0 @@
-# Using cellfinder at the SWC
-
-N.B. Before starting to use cellfinder on the SWC. It is recommended to familarise yourself with the job scheduler system
-([SLURM](https://slurm.schedmd.com/documentation.html)).
-
-:::{hint}
-This information refers to using cellfinder for cell detection and registration, but any of the BrainGlobe
-command-line tools (e.g. training cellfinder, or running only registration with brainreg) can be used similarly.
-:::
-
-## Interactive use
-
-On the SWC cluster, no software needs to be installed, as cellfinder can be loaded with `module load brainglobe`.
-
-Cellfinder can be used interactively, by starting an interactive job:
-
-```bash
-srun -p gpu --gres=gpu:1 -n 20 -t 0-24:00 --pty --mem=120G bash -i
-```
-
-Loading cellfinder:
-
-```bash
-module load brainglobe
-```
-
-And then running cellfinder as per the [User guide](index).
-
-## Batch processing
-
-It is recommended to use cellfinder by using the batch submission system. This has many advantages:
-
-Your analysis is reproducible \(you have a script showing exactly what you did\)
-* You don't need to wait for computing resources to become available \(once submitted, the job will wait until it can be run\)
-* If for any reason the analysis is interrupted, you can easily restart
-* You don't need to keep a connection to the cluster open
-* You can easily receive email updates when the job starts and finishes.
-
-An example batch script is given below, but it is recommended to familarise yourself with the batch submission system before trying to optimise cellfinder.
-
-```bash
-#!/bin/bash
-
-#SBATCH -p gpu # partition (queue)
-#SBATCH -N 1 # number of nodes
-#SBATCH --mem 120G # memory pool for all cores
-#SBATCH --gres=gpu:1
-#SBATCH -n 10
-#SBATCH -t 1-0:0 # time (D-HH:MM)
-#SBATCH -o cellfinder.out
-#SBATCH -e cellfinder.err
-#SBATCH --mail-type=ALL
-#SBATCH --mail-user=youremail@domain.com
-
-cell_file='/path/to/signal/channel'
-background_file='path/to/background/channel'
-output_dir='/path/to/output/directory'
-
-echo "Loading brainglobe environment"
-module load brainglobe
-
-echo "Running cellfinder"
-# Just an example. See the user guide for the specific parameters
-cellfinder -s $cell_file -b $background_file -o $output_dir -v 5 2 2 --orientation psl
-```
-
-
-
diff --git a/docs/source/documentation/cellfinder/user-guide/command-line/visualisation.md b/docs/source/documentation/cellfinder/user-guide/command-line/visualisation.md
deleted file mode 100644
index 072bf104..00000000
--- a/docs/source/documentation/cellfinder/user-guide/command-line/visualisation.md
+++ /dev/null
@@ -1,28 +0,0 @@
-# Visualisation
-cellfinder comes with a plugin for napari to allow you to easily view results.
-
-## Getting started
-
-To view your data, firstly, open napari. The easiest way to do this is open a terminal (making sure your cellfinder
-conda environment is activated), then just type `napari`. A napari window should open.
-
-## Visualising your raw data
-
-Assuming that your raw data is stored as `.tiff` files, drag these into napari (onto the main window in the middle).
-This should be whatever you passed to cellfinder originally, i.e., a single multipage tiff, or a directory of 2D tiffs.
-You can load as many channels as you like (e.g., the signal and the background channel).
-
-![Loading raw data into napari](/documentation/cellfinder/images/load_data.gif)
-
-## Visualising your results
-
-You can then drag and drop the cellfinder output directory (the one you specified with the `-o` flag)
-into the napari window. The plugin will then load your detected cells (in yellow) and the rejected cell
-candidates (in blue). If you carried out registration, then these results will be overlaid (similarly to the
-[brainreg plugin](/documentation/brainreg/user-guide/visualisation), but transformed to the coordinate space of
-your raw data).
-
-![Visualising cellfinder results. ](/documentation/cellfinder/images/load_results.gif)
-
-
-
diff --git a/docs/source/documentation/cellfinder/user-guide/napari-plugin/all-cell-detection-parameters.md b/docs/source/documentation/cellfinder/user-guide/napari-plugin/all-cell-detection-parameters.md
index 7aef092b..65791116 100644
--- a/docs/source/documentation/cellfinder/user-guide/napari-plugin/all-cell-detection-parameters.md
+++ b/docs/source/documentation/cellfinder/user-guide/napari-plugin/all-cell-detection-parameters.md
@@ -1,32 +1,35 @@
# All cell detection parameters
+
Details on all the cellfinder cell detection parameters.
-**Mandatory**
-* **Signal image** - set this to the image layer containing the labelled cells
-* **Background image** - set this to the image layer without cells
-* **Voxel size (z)** - in microns, the plane-spacing (from one 2D section to the next)
-* **Voxel size (y)** - in microns, the voxel size in the vertical (top to bottom) dimension
-* **Voxel size (x)** - in microns, the voxel size in the horizontal (left to right) dimension
+## Mandatory
+
+- **Signal image** - set this to the image layer containing the labelled cells
+- **Background image** - set this to the image layer without cells
+- **Voxel size (z)** - in microns, the plane-spacing (from one 2D section to the next)
+- **Voxel size (y)** - in microns, the voxel size in the vertical (top to bottom) dimension
+- **Voxel size (x)** - in microns, the voxel size in the horizontal (left to right) dimension
-**Optional**
+## Optional
-* **Soma diameter** - The expected soma size in um in the x/y dimensions. **Default 16**
-* **Ball filter (xy)** - The size in um of the ball used for the morphological filter in the x/y dimensions. **Default: 6**
-* **Ball filter (x)** - The size in um of the ball used for the morphological filter in the z dimension. **Default: 15**
-* **Filter width** - The filter size used in the Laplacian of Gaussian filter to enhance the cell intensities. Given as a fraction of the soma-diameter. **Default: 0.2**
-* **Threshold** - The cell intensity threshold, in multiples of the standard deviation above the mean. **Default: 10**
-* **Cell spread** - Soma size spread factor (for splitting up cell clusters). **Default: 1.4**
-* **Max cluster** - Largest putative cell cluster (in cubic um) where splitting should be attempted. **Default: 100000**
-* **Trained model** - To use your own network (not the one supplied with cellfinder) specify the model file.
+- **Soma diameter** - The expected soma size in um in the x/y dimensions. **Default: 16**
+- **Ball filter (xy)** - The size in um of the ball used for the morphological filter in the x/y dimensions. **Default: 6**
+- **Ball filter (z)** - The size in um of the ball used for the morphological filter in the z dimension. **Default: 15**
+- **Filter width** - The filter size used in the Laplacian of Gaussian filter to enhance the cell intensities. Given as a fraction of the soma-diameter. **Default: 0.2**
+- **Threshold** - The cell intensity threshold, in multiples of the standard deviation above the mean. **Default: 10**
+- **Cell spread** - Soma size spread factor (for splitting up cell clusters). **Default: 1.4**
+- **Max cluster** - Largest putative cell cluster (in cubic um) where splitting should be attempted. **Default: 100000**
+- **Trained model** - To use your own network (not the one supplied with cellfinder) specify the model file.
-**Misc options**
+## Misc options
-* To only analyse a limited number of planes (e.g., for speed during optimisation) you can:
- * Tick the **Analyse local** box. This will only process the planes around the currently selected plane
- * Set the **Start plane** and **End plane**, to e.g. 1000 and 1100, to only process the 100 planes between 1000 and 1100.
-* To ensure that cellfinder doesn't use all the CPU cores on a machine, the **Number of free cpus** can be set. **Default: 2**
-* To increase the logging (e.g. for troubleshooting), tick the **Debug** box.
+- To only analyse a limited number of planes (e.g., for speed during optimisation) you can:
+ - Tick the **Analyse local** box. This will only process the planes around the currently selected plane
+ - Set the **Start plane** and **End plane**, to e.g. 1000 and 1100, to only process the 100 planes between 1000 and 1100.
+- To ensure that cellfinder doesn't use all the CPU cores on a machine, the **Number of free cpus** can be set. **Default: 2**
+- To increase the logging (e.g. for troubleshooting), tick the **Debug** box.
:::{hint}
-The parameter values will be saved between sessions. The values can be reset by clicking the **Reset defaults** button.
-:::
\ No newline at end of file
+The parameter values will be saved between sessions.
+The values can be reset by clicking the **Reset defaults** button.
+:::
diff --git a/docs/source/documentation/cellfinder/user-guide/napari-plugin/cell-detection.md b/docs/source/documentation/cellfinder/user-guide/napari-plugin/cell-detection.md
index 19710bae..58e9b641 100644
--- a/docs/source/documentation/cellfinder/user-guide/napari-plugin/cell-detection.md
+++ b/docs/source/documentation/cellfinder/user-guide/napari-plugin/cell-detection.md
@@ -2,55 +2,50 @@
## Loading data
-Once napari, and the cellfinder plugin is installed, open napari, and load the plugin
-(`Plugins` -> `cellfinder-napari` -> `Cell detection`).
+Once napari and the `cellfinder` plugin are installed, open napari and load the plugin (`Plugins` -> `cellfinder` -> `Cell detection`).
:::{hint}
-A widget should then be docked into the side of your napari window. If this doesn't happen,
-check for any errors (`Plugins` -> `Plugin Errors`, then select the cellfinder plugin from the drop-down menu).
+A widget should then be docked into the side of your napari window.
+If this doesn't happen, check for any errors (`Plugins` -> `Plugin Errors`, then select the cellfinder plugin from the drop-down menu).
This error can be used to report problems on the GitHub page, or the help forum, see sidebar for links.
:::
-Then load your data (e.g. using the `File`-> menu, or by dragging and dropping data). There must be two
-registered channels, a signal channel (containing fluorescently labelled cells), and a background channel
-(containing only autofluorescence).
+Then load your data (e.g. using the `File`-> menu, or by dragging and dropping data).
+There must be two registered channels: a signal channel (containing fluorescently labelled cells), and a background channel (containing only autofluorescence).
:::{hint}
-There are many napari plugins for loading data. By default, single 3D tiffs, and directories of tiffs can be loaded.
+There are many napari plugins for loading data.
+By default, single 3D tiffs, and directories of tiffs can be loaded.
:::
## Setting parameters
-There are many parameters that can be set (see [All cell detection parameters](all-cell-detection-parameters)), but the following options
-must be set before running the plugin
+There are many parameters that can be set (see [All cell detection parameters](all-cell-detection-parameters)), but the following options must be set before running the plugin.
-**Mandatory**
+### Mandatory parameters
* **Signal image** - set this to the image layer containing the labelled cells
* **Background image** - set this to the image layer without cells
* **Voxel size (z)** - in microns, the plane-spacing (from one 2D section to the next)
-* **Voxel size (y) **- in microns, the voxel size in the vertical (top to bottom) dimension
+* **Voxel size (y)** - in microns, the voxel size in the vertical (top to bottom) dimension
* **Voxel size (x)** - in microns, the voxel size in the horizontal (left to right) dimension
-## Running cellfinder
+## Running `cellfinder`
-Click the **Run** button.
-
-The plugin will then run (this may take a while if you're analysing a large dataset), and will produce two
-additional image layers:
+Click the **Run** button.
+The plugin will then run (this may take a while if you're analysing a large dataset), and will produce two additional image layers:
* **Detected** - these are the cell candidates classified as cells
* **Rejected** (hidden by default) - these are the cell candidates classified as artefacts.
:::{hint}
-It is likely that the classification will not perform well on new data, to improve this, see
-[Training data generation](training-data-generation).
+It is likely that the classification will not perform well on new data; to improve this, see [Training data generation](training-data-generation).
:::
## Saving data
-The cell coordinates can be saved using any napari plugin (e.g., to csv). To save the cell coordinates in the
-cellfinder XML format:
+The cell coordinates can be saved using any napari plugin (e.g., to csv).
+To save the cell coordinates in the cellfinder XML format:
* Select the Points layers (e.g. **Detected** and **Rejected**)
* Click `File` -> `Save Selected Layer(s)`
diff --git a/docs/source/documentation/cellfinder/user-guide/napari-plugin/index.md b/docs/source/documentation/cellfinder/user-guide/napari-plugin/index.md
index f5564a82..b96adf46 100644
--- a/docs/source/documentation/cellfinder/user-guide/napari-plugin/index.md
+++ b/docs/source/documentation/cellfinder/user-guide/napari-plugin/index.md
@@ -1,15 +1,14 @@
# cellfinder napari plugin
-The cellfinder algorithm is in two stages. Firstly, cell candidates (objects of roughly the correct size and intensity
-to be a cell) are detected, and then a deep learning network classifies these cell candidates as being cells or
-artefacts. Because this classification step will need to be retrained for new data, the napari plugin is split
-into three sections:
+The cellfinder algorithm is in two stages.
+Firstly, cell candidates (objects of roughly the correct size and intensity to be a cell) are detected, and then a deep learning network classifies these cell candidates as being cells or artefacts.
+Because this classification step will need to be retrained for new data, the napari plugin is split into three sections:
```{toctree}
:maxdepth: 1
cell-detection
-training-data-generation
training-the-network
+training-data-generation
```
```{toctree}
@@ -19,11 +18,5 @@ all-cell-detection-parameters
```
:::{hint}
-To understand how cellfinder works, it may be useful to take a look at the
-[original paper](https://doi.org/10.1371/journal.pcbi.1009074).
+To understand how cellfinder works, it may be useful to take a look at the [original paper](https://doi.org/10.1371/journal.pcbi.1009074).
:::
-
-
-
-
-
diff --git a/docs/source/documentation/cellfinder/user-guide/napari-plugin/training-data-generation.md b/docs/source/documentation/cellfinder/user-guide/napari-plugin/training-data-generation.md
index 54eadcc4..47536ff8 100644
--- a/docs/source/documentation/cellfinder/user-guide/napari-plugin/training-data-generation.md
+++ b/docs/source/documentation/cellfinder/user-guide/napari-plugin/training-data-generation.md
@@ -1,42 +1,39 @@
# Generating data to retrain the cellfinder classification network
-Before generating training data, please see the guide to
-[retraining the pre-trained network](/documentation/cellfinder/user-guide/training-strategy).
+
+Before generating training data, please see the guide to [retraining the pre-trained network](/documentation/cellfinder/user-guide/training-strategy).
## Loading data
If you've just run the cell detection step, proceed to **Annotating data**.
-
Otherwise:
-* If you're starting from scratch, open napari, and load the plugin (`Plugins` -> `Curation`).
-* As the curation step is based on previous results, load an XML file from a previous analysis
-(e.g. saved from napari, or from the cellfinder command line software).
-This can just be dragged onto the main napari canvas.
-* Load the raw image data corresponding to the XML file (both signal and background channels).
+- If you're starting from scratch, open napari, and load the plugin (`Plugins` -> `Curation`).
+- As the curation step is based on previous results, load an XML file from a previous analysis (e.g. saved from napari, or from the cellfinder command line software). This can just be dragged onto the main napari canvas.
+- Load the raw image data corresponding to the XML file (both signal and background channels).
## Annotating data
-* Set the **signal image** and **background image** layers from the dropdown boxes.
-* Either load previous training data layers, and set these in **Training data (cells)** and
-**Training data (non cells)**, or click **Add training data layers** which will add two new layers,
+- Set the **signal image** and **background image** layers from the dropdown boxes.
+- Either load previous training data layers, and set these in **Training data (cells)** and **Training data (non cells)**, or click **Add training data layers** which will add two new layers,
and set them for you.
-* Go through your data, selecting both correctly, and incorrectly classified cell candidates by:
- * Highlighting the Points layer they're in
- * Selecting points
- * Clicking **Mark as cell(s)** or **Mark as non-cell(s)**
- * Repeat until you are finished labelling
-* Save your training data annotations in case you want to come back to them later:
- * Select the points layers (e.g. **Training data (cells)** and **Training data (non-cells**)
- * Click `File` -> `Save Selected Layer(s)`
- * Save with `.xml` extension (e.g. `curated_cells.xml`)
+- Go through your data, selecting both correctly, and incorrectly classified cell candidates by:
+ - Highlighting the Points layer they're in
+ - Selecting points
+ - Clicking **Mark as cell(s)** or **Mark as non-cell(s)**
+ - Repeat until you are finished labelling
+- Save your training data annotations in case you want to come back to them later:
+ - Select the points layers (e.g. **Training data (cells)** and **Training data (non-cells)**)
+ - Click `File` -> `Save Selected Layer(s)`
+ - Save with `.xml` extension (e.g. `curated_cells.xml`)
## Exporting data for training
-To retrain the network, the training data (small 3D images centered on each annotated cell
-candidate) must be saved. To do this:
+To retrain the network, the training data (small 3D images centered on each annotated cell candidate) must be saved.
+To do this:
-* Click **Save training data**
-* Choose (or create a new) directory
+- Click **Save training data**
+- Choose (or create a new) directory
This may take a while if you have lots of training data, or your data is slow to access (e.g. network drive).
diff --git a/docs/source/documentation/cellfinder/user-guide/napari-plugin/training-the-network.md b/docs/source/documentation/cellfinder/user-guide/napari-plugin/training-the-network.md
index 6b9debaa..3b6e9f0b 100644
--- a/docs/source/documentation/cellfinder/user-guide/napari-plugin/training-the-network.md
+++ b/docs/source/documentation/cellfinder/user-guide/napari-plugin/training-the-network.md
@@ -1,73 +1,60 @@
# Retraining the network for new data
-
-Once napari, and the cellfinder plugin is installed, open napari, and load the plugin
-(`Plugins` -> Train network).
+Once napari and the `cellfinder` plugin are installed, open napari, and load the plugin (`Plugins` -> `Train network`).
:::{hint}
-Make sure your GPU is set up to speed up the training. See
-[Setting up your GPU](/documentation/setting-up/gpu).
+Make sure your GPU is set up to speed up the training.
+See [Setting up your GPU](/documentation/setting-up/gpu).
:::
## Set parameters
-### **Mandatory**
+### Mandatory
-* **YAML files** - choose at least one YAML file containing paths to training data
-* **Output directory** - Choose (or create new) directory to save the trained models to
+- **YAML files** - choose at least one YAML file containing paths to training data
+- **Output directory** - Choose (or create new) directory to save the trained models to
-### **Optional**
+### Optional
-**Network**
+#### Network
-* **Trained model** - Path to a trained model to continue training from
-* **Model weights** - Path to existing model weights to continue training
-* **Model depth** - Resnet depth (based on [He et al. (2015)](https://arxiv.org/abs/1512.03385)).
-Choose from 18, 34, 50, 101 or 152. In theory, a deeper network should classify better, at the expense
-of a larger model, and longer training time. **Default: 50**
-* **Pretrained model** - Choose an existing model supplied with the software to continue training from.
+- **Trained model** - Path to a trained model to continue training from
+- **Model weights** - Path to existing model weights to continue training
+- **Model depth** - ResNet depth (based on [He et al. (2015)](https://arxiv.org/abs/1512.03385)). Choose from 18, 34, 50, 101 or 152. In theory, a deeper network should classify better, at the expense of a larger model and longer training time. **Default: 50**
+- **Pretrained model** - Choose an existing model supplied with the software to continue training from.
-When training your network, you can either train the network from scratch (not recommended), or
-select the **Continue training** box to retrain an existing network. Depending on how you want to
-train your network, different data or options must be supplied:
+You can either train the network from scratch (not recommended), or select the **Continue training** box to retrain an existing network. Depending on how you want to train your network, different data or options must be supplied:
-* If you are training a new network from scratch (i.e. **Continue training** is not selected),
-then you only need to select a **Model depth**.
-* If you are continuing training from a default, pretrained model, only **Pretrained model** needs to be chosen.
-* If you are continuing training from your own model, then only **Trained model** needs to be set.
-* If you are continuing training from your own model weights (i.e. not the full model,
-saved when **Save weights** is checked).
+- If you are training a new network from scratch (i.e. **Continue training** is not selected), then you only need to select a **Model depth**.
+- If you are continuing training from a default, pretrained model, only **Pretrained model** needs to be chosen.
+- If you are continuing training from your own model, then only **Trained model** needs to be set.
+- If you are continuing training from your own model weights (i.e. not the full model, saved when **Save weights** is checked), then only **Model weights** needs to be set.
-**Training**
+#### Training
-* **Continue Training** - Continue training from an existing trained model. If no model or
-model weights are specified, this will continue from the included model.
-* **Augment** - Use data augmentation to synthetically increase the amount of training data
-* **Tensorboard** - Log to `output_directory/tensorboard`. Use
-`tensorboard --logdir outputdirectory/tensorboard` to view.
-* **Save weights** - Only store the model weights, and not the full model. Useful to save storage space.
-* **Save checkpoints** - Save the model after each training epoch. Each model file can be large, and if you don't
-have much training data, they can be generated quickly. Deselect if you are training for many epochs, and you are
-happy to wait for the chosen number of epochs to complete.
-* **Save progress** - Save training progress to a .csv file (`output_directory/training.csv`
-* **Epochs** - How many times to use each sample for training. **Default: 100**
-* **Learning rate** - Learning rate for training the model
-* **Batch size** - Batch size for training (how many cell candidates to process at once). **Default: 16**
-* **Test fraction** - What fraction of data to keep for validation. **Default: 0.1**
+- **Continue Training** - Continue training from an existing trained model. If no model or model weights are specified, this will continue from the included model.
+- **Augment** - Use data augmentation to synthetically increase the amount of training data.
+- **Tensorboard** - Log to `output_directory/tensorboard`. Use `tensorboard --logdir output_directory/tensorboard` to view.
+- **Save weights** - Only store the model weights, and not the full model. Useful to save storage space.
+- **Save checkpoints** - Save the model after each training epoch. Each model file can be large, and if you don't have much training data, they can be generated quickly. Deselect if you are training for many epochs, and you are happy to wait for the chosen number of epochs to complete.
+- **Save progress** - Save training progress to a .csv file (`output_directory/training.csv`)
+- **Epochs** - How many times to use each sample for training. **Default: 100**
+- **Learning rate** - Learning rate for training the model.
+- **Batch size** - Batch size for training (how many cell candidates to process at once). **Default: 16**
+- **Test fraction** - What fraction of data to keep for validation. **Default: 0.1**
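As a quick back-of-the-envelope sketch of how the defaults above interact (the candidate count here is invented for illustration, and this is not cellfinder code):

```python
import math

# Hypothetical numbers to show how the training options interact.
n_candidates = 5000   # annotated cell candidates (made-up figure)
test_fraction = 0.1   # default "Test fraction"
batch_size = 16       # default "Batch size"

# The test fraction is held out for validation; the rest is trained on.
n_validation = int(n_candidates * test_fraction)
n_training = n_candidates - n_validation

# Each epoch processes every training sample once, in batches.
steps_per_epoch = math.ceil(n_training / batch_size)

print(n_validation)     # 500 candidates held out for validation
print(n_training)       # 4500 candidates used for training
print(steps_per_epoch)  # 282 batches per epoch
```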
-**Misc options**
+#### Misc options
-* To ensure that cellfinder doesn't use all the CPU cores on a machine, the
-**Number of free cpus** can be set. **Default: 2**
+- To ensure that cellfinder doesn't use all the CPU cores on a machine, the **Number of free cpus** can be set. **Default: 2**
:::{hint}
-Parameter values will be saved between sessions. The values can be reset by clicking the **Reset defaults** button.
+Parameter values will be saved between sessions.
+The values can be reset by clicking the **Reset defaults** button.
:::
## Run training
Click the **Run** button.
-The plugin will then run (this may take a while if you have lots of training data,
-or you have set many epochs). Trained models (`.h5` files) will be saved into your
-output directory, to be used for cell detection.
+The plugin will then run (this may take a while if you have lots of training data, or you have set many epochs).
+Trained models (`.h5` files) will be saved into your output directory, to be used for cell detection.
diff --git a/docs/source/documentation/cellfinder/user-guide/training-strategy.md b/docs/source/documentation/cellfinder/user-guide/training-strategy.md
index a7c5a2f3..1a902f8a 100644
--- a/docs/source/documentation/cellfinder/user-guide/training-strategy.md
+++ b/docs/source/documentation/cellfinder/user-guide/training-strategy.md
@@ -1,10 +1,8 @@
# Retraining the pre-trained network
-
Retraining the classification network is often the key step to ensure high-performance of cellfinder.
-We recommend that the presupplied network is retrained for each new application
-(e.g. microscope, labelling strategy etc.). The design of the cellfinder software means that this is
-different, but often simpler than other deep-learning-based analysis tools you may have used.
+We recommend that the presupplied network is retrained for each new application (e.g. microscope, labelling strategy etc.).
+The design of the cellfinder software means that this is different, but often simpler than other deep-learning-based analysis tools you may have used.
:::{hint}
It may be useful to consult the [original PLOS Computational Biology paper](https://doi.org/10.1371/journal.pcbi.1009074)
@@ -12,18 +10,22 @@ to get a better idea of the ideas behind the software.
:::
## Pre-trained network
-cellfinder is supplied with a network that was trained on approximately 100,000 manually annotated cell candidates
-(with a roughly 50/50 split between cells and non-cells). These came from serial-section two-photon data with
-whole-cell labelling. This is likely to be different from your data in some ways, e.g.:
+
+cellfinder is supplied with a network that was trained on approximately 100,000 manually annotated cell candidates (with a roughly 50/50 split between cells and non-cells).
+These came from serial-section two-photon data with whole-cell labelling.
+This is likely to be different from your data in some ways, e.g.:
+
- Microscopy technique
- Fluorescent label
- Labelled brain regions
- Labelled cell types
-However, we usually find that this network is a good starting point in a new analysis.
+However, we usually find that this network is a good starting point in a new analysis.
## Workflow
+
The typical workflow for using cellfinder on new data is:
+
1. Run cellfinder using the pre-trained network (or a network you or a collaborator has already trained)
2. Assess the performance of the network
3. Generate training data to "correct" the network in areas it has performed poorly
@@ -31,22 +33,22 @@ The typical workflow for using cellfinder on new data is:
5. Run cellfinder with the new network
6. Repeat steps 2 to 5
-Considering generating training data requires the input of a skilled human, but the other steps can be run automatically,
-we suggest that only small amounts of training data are generated at a time. Training data can be pooled from separate
-batches of annotations, so the network can be iteratively improved step by step.
+Since generating training data requires the input of a skilled human, while the other steps can be run automatically, we suggest that only small amounts of training data are generated at a time.
+Training data can be pooled from separate batches of annotations, so the network can be iteratively improved step by step.
## Training data generation strategy
-There is no "correct" way to create training data, but it is usually best to target the areas in which the current
-network performs worse. Typically, we recommend generating 1000-5000 cell candidates with a roughly even split between
+
+There is no "correct" way to create training data, but it is usually best to target the areas in which the current network performs worse.
+Typically, we recommend generating 1000-5000 cell candidates with a roughly even split between:
+
- Correctly classified cells
- Correctly classified artefacts
- Incorrectly classified cells
- Incorrectly classified artefacts
This process would usually take 2-4 hours (with practice it usually becomes much quicker!).
-
-Once the network is retrained, the process can be repeated until performance is satisfactory for the application.
+Once the network is retrained, the process can be repeated until performance is satisfactory for the application.
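As an illustrative sketch of pooling annotation batches across iterations (all counts below are invented for the example, not real annotation data):

```python
# Hypothetical annotation rounds, pooled into one growing training set.
# All numbers are made up to illustrate the strategy described above.
rounds = [
    {"cells": 600, "artefacts": 650},  # round 1: first corrections
    {"cells": 500, "artefacts": 480},  # round 2: target false positives
    {"cells": 550, "artefacts": 520},  # round 3: target missed cells
]

total_cells = sum(r["cells"] for r in rounds)
total_artefacts = sum(r["artefacts"] for r in rounds)

# The pooled set stays roughly balanced between cells and artefacts.
print(total_cells, total_artefacts)
```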
:::{caution}
Make sure you save all your training data, you can re-use it later (or share it with others).
-:::
\ No newline at end of file
+:::
diff --git a/docs/source/tutorials/cellfinder-cli/exploring-the-numerical-results.md b/docs/source/tutorials/brainmapper/exploring-the-numerical-results.md
similarity index 100%
rename from docs/source/tutorials/cellfinder-cli/exploring-the-numerical-results.md
rename to docs/source/tutorials/brainmapper/exploring-the-numerical-results.md
diff --git a/docs/source/tutorials/brainmapper/index.md b/docs/source/tutorials/brainmapper/index.md
new file mode 100644
index 00000000..c63e4aa5
--- /dev/null
+++ b/docs/source/tutorials/brainmapper/index.md
@@ -0,0 +1,39 @@
+# Whole brain cell detection and registration with the `brainmapper` command line tool
+
+:::{note}
+
+`brainmapper` was previously called the `cellfinder` command-line interface tool.
+
+This command-line tool was renamed with the release of `cellfinder` version `1.0.0`.
+You can read about these changes [on our blog](/blog/version1/cellfinder_migration_live).
+
+If you have previously been using the cellfinder command-line interface in your work, you'll most likely want to follow the links in the blog post to:
+
+- Upgrade your version of the `cellfinder` package,
+- Install `brainglobe-workflows` to get `brainmapper`, the same command-line tool but under its new name.
+
+:::
+
+Although the `brainmapper` command line tool is designed to be easy to install and use, if you're coming to it with fresh eyes, it's not always clear where to start.
+We provide an example brain to get you started, and also to illustrate how to play with the parameters to better suit your data.
+
+:::{caution}
+**The test dataset is large** ($\approx 250$ GB).
+It is recommended that you try this tutorial out on the fastest machine you have, with the fastest hard drive possible (ideally SSD) and an NVIDIA GPU.
+:::
+
+## Tutorial
+
+The tutorial is quite long, and is split into a number of sections.
+Please be aware that downloading the data and running `brainmapper` may take a long time (e.g., a couple of overnight runs) if you don't have access to a particularly high-powered computer, or fast network connection.
+
+Please go through the following sections in order:
+
+```{toctree}
+:maxdepth: 1
+setting-up
+running-brainmapper
+visualising-the-results
+exploring-the-numerical-results
+visualising-your-data-in-brainrender
+```
diff --git a/docs/source/tutorials/cellfinder-cli/running-cellfinder.md b/docs/source/tutorials/brainmapper/running-brainmapper.md
similarity index 61%
rename from docs/source/tutorials/cellfinder-cli/running-cellfinder.md
rename to docs/source/tutorials/brainmapper/running-brainmapper.md
index 1d4d817c..813d4321 100644
--- a/docs/source/tutorials/cellfinder-cli/running-cellfinder.md
+++ b/docs/source/tutorials/brainmapper/running-brainmapper.md
@@ -1,6 +1,7 @@
-# Running cellfinder
+# Running `brainmapper`
-`cellfinder` runs with a single command, with various arguments that are detailed in [Command line options](/documentation/cellfinder/user-guide/command-line/cli). To analyse the example data, the flags we need are:
+`brainmapper` runs with a single command, with various arguments that are detailed in the [command line options](/documentation/brainglobe-workflows/brainmapper/cli).
+To analyse the example data, the flags we need are:
- `-s` The primary **s**ignal channel: `test_brain/ch00`.
- `-b` The secondary autofluorescence channel (or **b**ackground): `test_brain/ch01`.
@@ -16,7 +17,7 @@ If your machine has less than 32GB of RAM, you should use the `allen_mouse_25um`
Putting this all together into a single command gives:
```bash
-cellfinder -s test_brain/ch00 -b test_brain/ch01 -o test_brain/output -v 5 2 2 --orientation psl --atlas allen_mouse_10um
+brainmapper -s test_brain/ch00 -b test_brain/ch01 -o test_brain/output -v 5 2 2 --orientation psl --atlas allen_mouse_10um
```
This command will take quite a long time (anywhere from 2-10 hours) to run, depending on:
@@ -26,7 +27,7 @@ This command will take quite a long time (anywhere from 2-10 hours) to run, depe
- The GPU you have
:::{hint}
-You'll know `cellfinder` has finished when you see something like this:
+You'll know `brainmapper` has finished when you see something like this:
`2020-10-14 00:07:20 AM - INFO - MainProcess main.py:86 - Finished. Total time taken: 3:22:42`
:::
@@ -36,12 +37,12 @@ If you just want to check that everything is working, we can speed everything up
- Using a lower-resolution atlas, using the flag: `--atlas allen_mouse_25um`
```bash
-cellfinder -s test_brain/ch00 -b test_brain/ch01 -o test_brain/output -v 5 2 2 --orientation psl --atlas allen_mouse_25um --start-plane 1500 --end-plane 1550
+brainmapper -s test_brain/ch00 -b test_brain/ch01 -o test_brain/output -v 5 2 2 --orientation psl --atlas allen_mouse_25um --start-plane 1500 --end-plane 1550
```
:::{hint}
If the cell classification step takes a (very) long time, it may not be using the GPU.
-If you have an NVIDIA GPU, see [Speeding up cellfinder](/documentation/cellfinder/troubleshooting/speed-up) to make sure that your GPU is set up properly.
+If you have an NVIDIA GPU, see [Speeding up brainmapper](/documentation/cellfinder/troubleshooting/speed-up) to make sure that your GPU is set up properly.
:::
-Once cellfinder has run, you can go onto [Visualising the results](visualising-the-results).
+Once `brainmapper` has run, you can go onto [Visualising the results](visualising-the-results).
diff --git a/docs/source/tutorials/cellfinder-cli/setting-up.md b/docs/source/tutorials/brainmapper/setting-up.md
similarity index 74%
rename from docs/source/tutorials/cellfinder-cli/setting-up.md
rename to docs/source/tutorials/brainmapper/setting-up.md
index 4173700b..00908b5a 100644
--- a/docs/source/tutorials/cellfinder-cli/setting-up.md
+++ b/docs/source/tutorials/brainmapper/setting-up.md
@@ -2,7 +2,7 @@
## Installation and download
-- First install the `cellfinder` command line tool, following the [Installation guide](/documentation/cellfinder/installation).
+- First install the `brainmapper` command line tool by installing the `brainglobe-workflows` package, following the [installation guide](/documentation/brainglobe-workflows/index.md#installation).
- Download the data from [here](https://gin.g-node.org/cellfinder/data/raw/master/test\_brain\_SK\_AA\_71\_3.zip) (it will take a long time to download). Thanks to [Sepiedeh Keshavarzi](https://www.keshavarzilab.com/) for sharing the data.
- Unzip the data to a directory of your choice (doesn't matter where). You should end up with a directory called `test_brain` with two directories, each containing 2800 images.
- Open a terminal (Linux) or your command prompt (Windows).
@@ -14,7 +14,7 @@ The test data supplied is purposefully not the "best".
It has a low SNR, and some artefacts such as fluorescent vasculature, and bright spots on the surface of the brain.
In addition, the cell classification network was trained on different data, to give you an idea of "real world" performance.
-The aim of this tutorial is not to show `cellfinder` performing perfectly.
+The aim of this tutorial is not to show `brainmapper` performing perfectly.
It is instead to illustrate how it deals with less than perfect data, and how to improve the performance.
With all analysis methods, please test it out on your data to see if it works for you, and feel free [to get in touch](/contact).
@@ -22,13 +22,13 @@ With all analysis methods, please test it out on your data to see if it works fo
## Before you start
-To run `cellfinder`, you need to know:
+To run `brainmapper`, you need to know:
- Where your data is (in this case, it's the path to the `test_brain` directory).
- Which image is the primary signal channel (the one with the labelled cells) and which is the secondary autofluorescence channel. In this case, `test_brain/ch00` is the signal channel and `test_brain/ch01` is the autofluorescence channel.
-- Where you want to save the output data (we'll just save it into a directory called `cellfinder_output`in the same directory as the `test_brain`).
+- Where you want to save the output data (we'll just save it into a directory called `brainmapper_output` in the same directory as the `test_brain`).
- The pixel sizes of your data in microns (see [Image definition](/documentation/setting-up/image-definition) for details). In this case, our data is 2μm per pixel in the coronal plane and the spacing of each plane is 5μm.
-- The orientation of your data. For atlas registration (using [brainreg](/documentation/brainreg/index)) the software needs to know how you acquired your data (coronal, sagittal etc.). For this cellfinder uses [bg-space](/documentation/bg-space/index). Full details on how to enter your data orientation can also be found in the [Image definition](/documentation/setting-up/image-definition) section. For this tutorial, the orientation is `psl`, which means that the data origin is the most **p**osterior, **s**uperior, **l**eft voxel.
+- The orientation of your data. For atlas registration (using [brainreg](/documentation/brainreg/index)) the software needs to know how you acquired your data (coronal, sagittal etc.). For this `brainmapper` uses [bg-space](/documentation/bg-space/index). Full details on how to enter your data orientation can also be found in the [Image definition](/documentation/setting-up/image-definition) section. For this tutorial, the orientation is `psl`, which means that the data origin is the most **p**osterior, **s**uperior, **l**eft voxel.
- Which atlas you want to use (you can see which are available by running `brainglobe list`). In this case, we want to use a mouse atlas (as that's what our data is), and we'll use the 10μm version of the [Allen Mouse Brain Atlas](https://mouse.brain-map.org/static/atlas).
-Now you're ready to start [Running cellfinder](running-cellfinder).
+Now you're ready to start [Running brainmapper](running-brainmapper).
diff --git a/docs/source/tutorials/cellfinder-cli/visualising-the-results.md b/docs/source/tutorials/brainmapper/visualising-the-results.md
similarity index 72%
rename from docs/source/tutorials/cellfinder-cli/visualising-the-results.md
rename to docs/source/tutorials/brainmapper/visualising-the-results.md
index 4229db57..556c3e61 100644
--- a/docs/source/tutorials/cellfinder-cli/visualising-the-results.md
+++ b/docs/source/tutorials/brainmapper/visualising-the-results.md
@@ -4,16 +4,16 @@ description: How to inspect the results in napari
# Visualising the results
-`cellfinder` comes with a plugin for [napari](https://napari.org/) for easily visualising the results.
-For more information, see [Visualisation](/documentation/cellfinder/user-guide/command-line/visualisation).
+`brainmapper` comes with a plugin for [napari](https://napari.org/) for easily visualising the results.
+For more information, see the [visualisation](/documentation/brainglobe-workflows/brainmapper/visualisation.md) section.
To quickly view your data:
- Open napari (type `napari` into a command window).
- Into the window then drag and drop:
- The signal channel directory (`test_brain/ch00`),
- - The entire cellfinder output directory.
+ - The entire brainmapper output directory.
-![cellfinder results viewed in napari](../images/cellfinder_results.png)
+![brainmapper results viewed in napari](../images/brainmapper_results.png)
The napari window then will then be populated with different layers (left-hand side) that can be toggled:
@@ -23,7 +23,7 @@ The napari window then will then be populated with different layers (left-hand s
- `Non cells` The cell candidates classified as artefacts (blue).
- `Cells` The cell candidates classified as cells (yellow).
-If you click on the image above to enlarge, you should get a good idea of how `cellfinder` works:
+If you click on the image above to enlarge, you should get a good idea of how `brainmapper` works:
- The coloured regions and the outlines show the segmentation of the brain (following atlas registration).
- The yellow circles show the detected cells (mostly in retrosplenial cortex and thalamus). There are also a few false positives (such as three on the surface of the brain and one outside the brain). This shows that the cell classification network (trained on other brains) is not quite 100%, and should be retrained with the addition of some data from this brain.
@@ -33,5 +33,5 @@ If you click on the image above to enlarge, you should get a good idea of how `c
To make the results a bit more obvious when zoomed out, the contrast of the raw data (`ch00`) has been adjusted along with changing the symbol for the cells to `disc` and increasing the size.
:::
-These images are useful to assess how well `cellfinder` performed, but not much use for any kind of numerical analysis.
-To see what data is exported from cellfinder, take a look at [Exploring the numerical results](exploring-the-numerical-results).
+These images are useful to assess how well `brainmapper` performed, but not much use for any kind of numerical analysis.
+To see what data is exported from `brainmapper`, take a look at [Exploring the numerical results](exploring-the-numerical-results).
diff --git a/docs/source/tutorials/cellfinder-cli/visualising-your-data-in-brainrender.md b/docs/source/tutorials/brainmapper/visualising-your-data-in-brainrender.md
similarity index 87%
rename from docs/source/tutorials/cellfinder-cli/visualising-your-data-in-brainrender.md
rename to docs/source/tutorials/brainmapper/visualising-your-data-in-brainrender.md
index ae09a62d..ee75558f 100644
--- a/docs/source/tutorials/cellfinder-cli/visualising-your-data-in-brainrender.md
+++ b/docs/source/tutorials/brainmapper/visualising-your-data-in-brainrender.md
@@ -1,9 +1,9 @@
# Visualising your data in brainrender
-
+
To generate 3D figures of your data in atlas space, you can use [brainrender](/documentation/brainrender/index).
-cellfinder automatically exports a file in a brainrender compatible format, which can be found at
+`brainmapper` automatically exports a file in a brainrender-compatible format, which can be found at
`test_brain/output/points/points.npy`.
Once you've [installed brainrender](/documentation/brainrender/installation), you can try something like this:
diff --git a/docs/source/tutorials/cellfinder-cli/index.md b/docs/source/tutorials/cellfinder-cli/index.md
deleted file mode 100644
index 73b86a7d..00000000
--- a/docs/source/tutorials/cellfinder-cli/index.md
+++ /dev/null
@@ -1,24 +0,0 @@
-# Whole brain cell detection and registration with the `cellfinder` command line tool
-
-Although the `cellfinder` command line tool is designed to be easy to install and use, if you're coming to it with fresh eyes, it's not always clear where to start. We provide an example brain to get you started, and also to illustrate how to play with the parameters to better suit your data.
-
-:::{caution}
-**The test dataset is large** \(~250GB\).
-It is recommended that you try this tutorial out on the fastest machine you have, with the fastest hard drive possible (ideally SSD) and an NVIDIA GPU.
-:::
-
-## Tutorial
-
-The tutorial is quite long, and is split into a number of sections.
-Please be aware that downloading the data and running `cellfinder` may take a long time (e.g., overnight x2) if you don't have access to a particularly high-powered computer, or fast network connection.
-
-Please go through the following sections in order:
-
-```{toctree}
-:maxdepth: 1
-setting-up
-running-cellfinder
-visualising-the-results
-exploring-the-numerical-results
-visualising-your-data-in-brainrender
-```
diff --git a/docs/source/tutorials/images/cellfinder_results.png b/docs/source/tutorials/images/brainmapper_results.png
similarity index 100%
rename from docs/source/tutorials/images/cellfinder_results.png
rename to docs/source/tutorials/images/brainmapper_results.png
diff --git a/docs/source/tutorials/index.md b/docs/source/tutorials/index.md
index 2c853c3c..d7dcb6d4 100644
--- a/docs/source/tutorials/index.md
+++ b/docs/source/tutorials/index.md
@@ -1,6 +1,7 @@
# Tutorials
## Getting started
+
::::{grid} 1 2 2 3
:gutter: 3
@@ -48,10 +49,10 @@ Retraining the cellfinder cell classification network in napari
::::
## Specific applications
+
::::{grid} 1 2 2 3
:gutter: 3
-
:::{grid-item-card} {fas}`brain;sd-text-primary` Probe segmentation
:img-bottom: images/probes.png
:link: silicon-probe-tracking
@@ -59,7 +60,6 @@ Retraining the cellfinder cell classification network in napari
Analysis of silicon probe tracks (e.g. Neuropixels)
:::
-
:::{grid-item-card} {fas}`brain;sd-text-primary` Bulk tracing analysis
:img-bottom: images/bulkaxons.png
:link: tracing-tracking
@@ -67,9 +67,9 @@ Analysis of silicon probe tracks (e.g. Neuropixels)
Analyze and visualize bulk fluorescence tracing data
:::
-:::{grid-item-card} {fas}`brain;sd-text-primary` Cell detection
+:::{grid-item-card} {fas}`brain;sd-text-primary` Cell detection via brainmapper
:img-bottom: images/cellfinder.png
-:link: cellfinder-cli/index
+:link: brainmapper/index
:link-type: doc
Whole brain cell detection and registration
:::
@@ -88,6 +88,5 @@ cellfinder-detection
cellfinder-retraining
silicon-probe-tracking
tracing-tracking
-cellfinder-cli/index
-
+brainmapper/index
```