diff --git a/docs/source/blog/bg-space-rename.md b/docs/source/blog/bg-space-rename.md
new file mode 100644
index 00000000..98d904a0
--- /dev/null
+++ b/docs/source/blog/bg-space-rename.md
@@ -0,0 +1,36 @@
+---
+blogpost: true
+date: Jan 24, 2024
+author: Will Graham
+location: London, England
+category: brainglobe
+language: English
+---
+
+# `bg-space` has been renamed
+
+The "bg" prefix that a number of BrainGlobe tools carry is neither very distinctive nor informative, so we are rolling out minor name changes to the packages that contain it.
+We are also taking this opportunity to bring these tools into line with our developer guidelines for automatic deployment, tooling, and testing.
+
+The first on our list is `bg-space`, which is being renamed to the more explicit `brainglobe-space`.
+Beyond the name, there are no functionality changes to the package; however, several other tools will switch to depending on `brainglobe-space` from now on, and `bg-space` will be archived and will not receive any future updates.
+
+## What do I need to do?
+
+Users who installed BrainGlobe through its single ("meta") package with `pip install brainglobe` will just need to update the package by running
+
+```bash
+pip install brainglobe --upgrade
+```
+
+in your environment, which will fetch the new version of all the affected packages.
+You can use `pip show brainglobe` to check that your version has updated - you should find that `brainglobe` is now at version 1.0.1 or higher.
+
+If you are manually managing your BrainGlobe tools, you will need to uninstall `bg-space` and install `brainglobe-space` in its place.
+You'll also need to update the following packages, which have dropped their `bg-space` dependency and now depend on `brainglobe-space`:
+
+- `bg-atlasapi`, version 1.0.3 or newer.
+- `brainglobe-napari-io`, version 0.3.3 or newer.
+- `morphapi`, version 0.2.2 or newer.
+- `brainrender`, version 2.1.5 or newer.
+- `brainreg`, version 1.0.4 or newer.
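For manual installs, the swap described above might look like the following shell sketch (the version pins are taken directly from the list above; adapt them to your environment):

```shell
# Remove the old package and install its renamed replacement
pip uninstall -y bg-space
pip install brainglobe-space

# Upgrade the dependents listed above so they pick up brainglobe-space
pip install --upgrade "bg-atlasapi>=1.0.3" "brainglobe-napari-io>=0.3.3" \
    "morphapi>=0.2.2" "brainrender>=2.1.5" "brainreg>=1.0.4"
```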
diff --git a/docs/source/community/developers/repositories/brainglobe-meta/brainglobe-dependencies.svg b/docs/source/community/developers/repositories/brainglobe-meta/brainglobe-dependencies.svg
index 0fb14e18..a2136edb 100644
--- a/docs/source/community/developers/repositories/brainglobe-meta/brainglobe-dependencies.svg
+++ b/docs/source/community/developers/repositories/brainglobe-meta/brainglobe-dependencies.svg
@@ -1,1573 +1,4 @@
-
-
+
+
+
+
\ No newline at end of file
diff --git a/docs/source/community/developers/repositories/brainglobe-meta/index.md b/docs/source/community/developers/repositories/brainglobe-meta/index.md
index 33d4d3ed..e2e1dc37 100644
--- a/docs/source/community/developers/repositories/brainglobe-meta/index.md
+++ b/docs/source/community/developers/repositories/brainglobe-meta/index.md
@@ -99,3 +99,5 @@ As a general rule of thumb when editing the dependency chart, packages/tools sho
Packages on the top level depend on no other BrainGlobe tools.
Packages on the level below depend on at least one package from the level above, and any number of packages from the level(s) further up than that.
This illustrates both how BrainGlobe tools build on each other, as well as which tools may be affected by new releases of others.
+
+Latest rendering of the dependency tree: `v1.0.1`.
diff --git a/docs/source/documentation/bg-space/index.md b/docs/source/documentation/brainglobe-space/index.md
similarity index 99%
rename from docs/source/documentation/bg-space/index.md
rename to docs/source/documentation/brainglobe-space/index.md
index 443b55af..735cfd22 100644
--- a/docs/source/documentation/bg-space/index.md
+++ b/docs/source/documentation/brainglobe-space/index.md
@@ -28,7 +28,7 @@ target_origin = ("Inferior", "Posterior", "Right")
A stack can be then easily transformed from the source to the target space:
```python
-import bg_space as bg
+import brainglobe_space as bg
import numpy as np
stack = np.random.rand(3, 2, 4)
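The renamed import is a drop-in replacement. For intuition about what an `asl` to `ipr` mapping like the one above does to a stack, here is a hand-rolled NumPy sketch of the axis bookkeeping (this mimics the transform by hand rather than calling the library): source axes run A→P, S→I, L→R, while target axes run I→S, P→A, R→L, so the first two axes swap and all three flip.

```python
import numpy as np

stack = np.random.rand(3, 2, 4)  # axes ordered asl: A->P, S->I, L->R

# Map to ipr: target axis 0 is the (flipped) source axis 1, target axis 1
# is the (flipped) source axis 0, and the left-right axis is flipped in place.
mapped = np.flip(stack.transpose(1, 0, 2), axis=(0, 1, 2))

assert mapped.shape == (2, 3, 4)
assert mapped[0, 0, 0] == stack[2, 1, 3]  # target origin is the opposite source corner
```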
diff --git a/docs/source/documentation/bg-space/projections.png b/docs/source/documentation/brainglobe-space/projections.png
similarity index 100%
rename from docs/source/documentation/bg-space/projections.png
rename to docs/source/documentation/brainglobe-space/projections.png
diff --git a/docs/source/documentation/brainreg/user-guide/brainreg-napari.md b/docs/source/documentation/brainreg/user-guide/brainreg-napari.md
index f4b54bcd..c7e0bbb1 100644
--- a/docs/source/documentation/brainreg/user-guide/brainreg-napari.md
+++ b/docs/source/documentation/brainreg/user-guide/brainreg-napari.md
@@ -1,60 +1,60 @@
# Napari plugin
-How to use the brainreg napari plugin
+
+How to use `brainreg`'s napari plugin.
## Getting started
-To register your data, you will need a whole-brain image, i.e., not a part of a brain, and not some individual 2D
-sections. The format doesn't matter, as long as it can be loaded into napari, and the orientation etc. is dealt with by brainreg.
+To register your data, you will need a whole-brain image, i.e., not a part of a brain, and not some individual 2D sections.
+The format doesn't matter, as long as it can be loaded into napari.
+The orientation, etc., is dealt with by `brainreg`.
## Loading data
-Loading your data into napari will vary depending on the data type, but with most types, you should be able to drag
-and drop your data into the main napari window.
+Loading your data into napari will vary depending on the data type, but with most types, you should be able to drag and drop your data into the main napari window.
:::{note}
-If you are having trouble loading your data into napari, first check the [napari hub](https://www.napari-hub.org/) to
-see if there's a plugin to help. If that fails, go ahead and ask the nice people at the
-[image.sc](https://forum.image.sc/tag/napari) forum to see if anybody can help.
+If you are having trouble loading your data into napari, first check the [napari hub](https://www.napari-hub.org/) to see if there's a plugin to help.
+If that fails, go ahead and ask the nice people on the [image.sc](https://forum.image.sc/tag/napari) forum to see if anybody can help.
:::
## Starting the plugin
-Click `Plugins` at the top of the main napari window, and then click `brainreg-register: Atlas registration`. A new
-docked widget will appear in your napari window.
+Click `Plugins` at the top of the main napari window, and then click `brainreg-register: Atlas registration`.
+A new docked widget will appear in your napari window.
## Setting up registration
-Choose the napari image layer you wish to be registered from `Image layer`, along with the atlas you want to use from
-`Atlas`. You must also set the voxel sizes in the axial (z) and in-plane (x, y) dimensions, along with the data
-orientation. The orientation is defined by three letters, based on [bg-space](https://github.com/brainglobe/bg-space),
-e.g. `psl`. For more details on this, see the outline
-[here](/documentation/setting-up/image-definition). Lastly, set an `Output directory`
-(where you want to save the data).
+Choose the napari image layer you wish to be registered from `Image layer`, along with the atlas you want to use from `Atlas`.
+You must also set the voxel sizes in the axial (z) and in-plane (x, y) dimensions, along with the data orientation.
+The orientation is defined by three letters, based on [brainglobe-space](https://github.com/brainglobe/brainglobe-space), e.g. `psl`.
+For more details on this, see the outline [here](/documentation/setting-up/image-definition).
+Lastly, set an `Output directory` (where you want to save the data).
### Registering additional channels
-`brainreg` will use a single channel for registration. This is typically an image without much signal,
-such as an image of only autofluroescence. Images of brain-wide stains such as DAPI can also work well.
-To register any additional channels, make sure these are selected in the list of layers on the left-hand side of the
-napari window. The registration will be performed on the image chosen as `Image layer`, but the transformations will be
-applied to these other channels. This is useful if you want to later analyse multiple channels, or if the channel of interest
-registers poorly due to high signal levels from staining etc.
+`brainreg` will use a single channel for registration.
+This is typically an image without much signal, such as an image of only autofluorescence.
+Images of brain-wide stains such as DAPI can also work well.
+
+To register any additional channels, make sure these are selected in the list of layers on the left-hand side of the napari window.
+The registration will be performed on the image chosen as `Image layer`, but the transformations will be applied to these other channels.
+This is useful if you want to later analyse multiple channels, or if the channel of interest registers poorly due to high signal levels from staining, etc.
:::{caution}
-Make sure that the image layer you are registering is not selected in the list of napari image layers on the left-hand
-side, otherwise it will be registered twice!
+Make sure that the image layer you are registering is not selected in the list of napari image layers on the left-hand side, otherwise it will be registered twice!
:::
## Setting additional parameters
-There are many parameters that can be set to improve registration performance. For more details on these, see
-the documentation [here](./parameters).
+
+There are many parameters that can be set to improve registration performance.
+For more details on these, see the documentation [here](./parameters).
## Running brainreg
-You can then click `Run`, and the registration will start. Lots of stuff will get printed to the console as
-brainreg runs, and when it's done (it should only take a minute or so), you will see something like:
+You can then click `Run`, and the registration will start.
+Lots of stuff will get printed to the console as `brainreg` runs, and when it's done (it should only take a minute or so), you will see something like:
-```
+```bash
INFO - MainProcess cli.py:230 - Finished. Total time taken: 0:00:29.15
```
@@ -62,9 +62,9 @@ This means that the registration is complete, but you should see the results app
Once the registration is complete, some new image layers will appear:
-* Atlas annotations - this the annotations image from the atlas (where each brain region has a unique value) warped to the data
-* Boundary image - this is a binary image, showing the boundaries between atlas regions.
+- Atlas annotations - this is the annotations image from the atlas (where each brain region has a unique value), warped to the data.
+- Boundary image - this is a binary image, showing the boundaries between atlas regions.
-These files are not the only ones created; they will all be saved in the output directory.
-These can be loaded into napari at any time, see the [main visualisation page](visualisation).
+These are not the only files created; all output files are saved in the output directory.
+These can be loaded into napari at any time; see the [main visualisation page](visualisation).
For more details of the output files created, please see [output files](output-files).
diff --git a/docs/source/documentation/brainreg/user-guide/checking-orientation.md b/docs/source/documentation/brainreg/user-guide/checking-orientation.md
index ffc8da66..37d768a6 100644
--- a/docs/source/documentation/brainreg/user-guide/checking-orientation.md
+++ b/docs/source/documentation/brainreg/user-guide/checking-orientation.md
@@ -4,7 +4,7 @@ To ensure that the orientation is set correctly, `napari-brainreg` comes with a
orientation (thanks to [Jules Scholler](https://github.com/JulesScholler)!).
Once you've loaded your data, fill in the input orientation in the GUI based on the
-[bg-space definition](/documentation/setting-up/image-definition) and click `Check orientation`.
+[brainglobe-space definition](/documentation/setting-up/image-definition) and click `Check orientation`.
This will generate a number of new images that are displayed to the user. The top row of displayed images are the
projections of the reference atlas. The bottom row are the projections of the aligned input data. If the two rows are
similarly oriented, the orientation is correct. If not, change the orientation and try again.
diff --git a/docs/source/documentation/brainrender/usage/using-your-data/index.md b/docs/source/documentation/brainrender/usage/using-your-data/index.md
index 55743d58..2d7b3734 100644
--- a/docs/source/documentation/brainrender/usage/using-your-data/index.md
+++ b/docs/source/documentation/brainrender/usage/using-your-data/index.md
@@ -18,7 +18,7 @@ some of these are focused on showing how to load and use your data in brainrende
In order to visualize your data in brainrender, it has to be in register with the axes system in brainrender. If
you used tools like [brainreg](/documentation/brainreg/index) your data will already be registered, and you can skip
this step. If not, you will have to transform your data so that its axes match brainrender's.
-BrainGlobe includes [bg-space](/documentation/bg-space/index), software that aims at facilitating the operation of
+BrainGlobe includes [brainglobe-space](/documentation/brainglobe-space/index), software that aims at facilitating the operation of
swapping axes around, which can get confusing rapidly otherwise.
Check [Registering data](registering-data) for more details.
diff --git a/docs/source/documentation/brainrender/usage/using-your-data/registering-data.md b/docs/source/documentation/brainrender/usage/using-your-data/registering-data.md
index 59854f93..8ca87b6d 100644
--- a/docs/source/documentation/brainrender/usage/using-your-data/registering-data.md
+++ b/docs/source/documentation/brainrender/usage/using-your-data/registering-data.md
@@ -16,7 +16,7 @@ to atlases, this is still a not-trivial steps in the analysis of any anatomical
## Aligning to atlas's space
-Brainglobe's Atlas API relies on [bg-space](/documentation/bg-space/index) for transforming data (e.g. image stacks)
+Brainglobe's Atlas API relies on [brainglobe-space](/documentation/brainglobe-space/index) for transforming data (e.g. image stacks)
so that they are all oriented the same way. brainglobe-space provides a convenient naming convention to define the orientation
of your data based on where the origin is and the direction that the three main axes (first three dimensions of your
image data) point towards.
@@ -38,7 +38,7 @@ shape: (528, 320, 456)
"""
```
-Check the [bg-space documentation](/documentation/bg-space/index)) for more details.
+Check the [brainglobe-space documentation](/documentation/brainglobe-space/index) for more details.
## Matching resolution and offset
@@ -48,7 +48,7 @@ different resolution or to a different offset. Resolution refers to how many mic
microns) the side of the voxels in your image correspond to. Offset refers to the fact that the origin of your
image might be offset from the origin of the atlas space (e.g. if you didn't image the entire brain).
-Here too bg-space provides tools to address mismatches in these two aspects.
+Here too brainglobe-space provides tools to address mismatches in these two aspects.
diff --git a/docs/source/documentation/index.md b/docs/source/documentation/index.md
index f0e87f32..b7e10651 100644
--- a/docs/source/documentation/index.md
+++ b/docs/source/documentation/index.md
@@ -56,7 +56,7 @@ Once you have installed `brainglobe`, or [installed an individual tool](#install
:maxdepth: 1
setting-up/index
bg-atlasapi/index
-bg-space/index
+brainglobe-space/index
brainreg/index
brainglobe-segmentation/index
brainglobe-workflows/index
diff --git a/docs/source/documentation/setting-up/image-definition.md b/docs/source/documentation/setting-up/image-definition.md
index 992ccaff..60445c4c 100644
--- a/docs/source/documentation/setting-up/image-definition.md
+++ b/docs/source/documentation/setting-up/image-definition.md
@@ -4,7 +4,7 @@ In some BrainGlobe tools, you need to specify the orientation and resolution of
## Orientation
When you need to specify the orientation of your data, you will usually need to enter a string in the
-[bg-space](https://github.com/brainglobe/bg-space) "initials" form, to describe the origin voxel.
+[brainglobe-space](https://github.com/brainglobe/brainglobe-space) "initials" form, to describe the origin voxel.
When you work with a stack, the origin is the upper left corner when you show the first element `stack[0, :, :]` with
matplotlib or when you open the stack with ImageJ. The first dimension is the one that you are slicing, the second is
diff --git a/docs/source/tutorials/brainmapper/setting-up.md b/docs/source/tutorials/brainmapper/setting-up.md
index 00908b5a..0a946766 100644
--- a/docs/source/tutorials/brainmapper/setting-up.md
+++ b/docs/source/tutorials/brainmapper/setting-up.md
@@ -28,7 +28,7 @@ To run `brainmapper`, you need to know:
- Which image is the primary signal channel (the one with the labelled cells) and which is the secondary autofluorescence channel. In this case, `test_brain/ch00` is the signal channel and `test_brain/ch01` is the autofluorescence channel.
- Where you want to save the output data (we'll just save it into a directory called `brainmapper_output` in the same directory as the `test_brain`).
- The pixel sizes of your data in microns (see [Image definition](/documentation/setting-up/image-definition) for details). In this case, our data is 2μm per pixel in the coronal plane and the spacing of each plane is 5μm.
-- The orientation of your data. For atlas registration (using [brainreg](/documentation/brainreg/index)) the software needs to know how you acquired your data (coronal, sagittal etc.). For this `brainmapper` uses [bg-space](/documentation/bg-space/index). Full details on how to enter your data orientation can also be found in the [Image definition](/documentation/setting-up/image-definition) section. For this tutorial, the orientation is `psl`, which means that the data origin is the most **p**osterior, **s**uperior, **l**eft voxel.
+- The orientation of your data. For atlas registration (using [brainreg](/documentation/brainreg/index)) the software needs to know how you acquired your data (coronal, sagittal etc.). For this `brainmapper` uses [brainglobe-space](/documentation/brainglobe-space/index). Full details on how to enter your data orientation can also be found in the [Image definition](/documentation/setting-up/image-definition) section. For this tutorial, the orientation is `psl`, which means that the data origin is the most **p**osterior, **s**uperior, **l**eft voxel.
- Which atlas you want to use (you can see which are available by running `brainglobe list`). In this case, we want to use a mouse atlas (as that's what our data is), and we'll use the 10μm version of the [Allen Mouse Brain Atlas](https://mouse.brain-map.org/static/atlas).
Now you're ready to start [Running brainmapper](running-brainmapper).