
Commit

deploy: 070bcc2
AlbertDominguez committed Oct 8, 2024
0 parents commit 7d644bb
Showing 230 changed files with 8,342 additions and 0 deletions.
4 changes: 4 additions & 0 deletions .buildinfo
@@ -0,0 +1,4 @@
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
config: 2b8e367c394aa09fb8e91de9785b4bf4
tags: 645f666f9bcd5a90fca523b33c5a78b7
Binary file added .doctrees/api.doctree
Binary file added .doctrees/cli.doctree
Binary file added .doctrees/environment.pickle
Binary file added .doctrees/finetune.doctree
Binary file added .doctrees/index.doctree
Binary file added .doctrees/napari.doctree
Binary file added .doctrees/train.doctree
Empty file added .nojekyll
Empty file.
Binary file added _images/spotiflow_napari_gui.png
Binary file added _images/spotiflow_napari_preds.png
22 changes: 22 additions & 0 deletions _sources/api.rst.txt
@@ -0,0 +1,22 @@
API Reference
-------------

.. autoclass:: spotiflow.model.spotiflow.Spotiflow
   :members: from_pretrained, from_folder, predict, fit, save, load, optimize_threshold

.. autoclass:: spotiflow.model.config.SpotiflowModelConfig
   :members:

.. autoclass:: spotiflow.model.config.SpotiflowTrainingConfig
   :members:

.. autoclass:: spotiflow.data.spots.SpotsDataset
   :members:

   .. automethod:: __init__

.. automodule:: spotiflow.utils
   :members: get_data, read_coords_csv, write_coords_csv, normalize

.. automodule:: spotiflow.sample_data
   :members:
12 changes: 12 additions & 0 deletions _sources/cli.rst.txt
@@ -0,0 +1,12 @@
Inference via CLI
-----------------

Command Line Interface (CLI)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can use the CLI to run inference on a single image or on a folder containing several images. To do that, you can use the following command:

.. code-block:: console

   $ spotiflow predict --input PATH

where ``PATH`` can be either an image or a folder. By default, the command will use the ``general`` pretrained model. You can specify a different model with the ``--pretrained-model`` flag. Moreover, spots are saved to a subfolder ``spotiflow_results`` created inside the input folder (this can be changed with the ``--out-dir`` flag). For more information, please refer to the help message of the CLI (``spotiflow predict -h``).
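For example, a run that overrides both defaults could look like the sketch below (the model name ``hybiss`` and the output folder are placeholder values used purely for illustration):

.. code-block:: console

   $ spotiflow predict --input /data/images --pretrained-model hybiss --out-dir /data/images/my_spots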
38 changes: 38 additions & 0 deletions _sources/finetune.rst.txt
@@ -0,0 +1,38 @@
Fine-tuning a Spotiflow model on a custom dataset
-------------------------------------------------

Data format
^^^^^^^^^^^

See :ref:`train:Data format`.

Fine-tuning
^^^^^^^^^^^

Fine-tuning a pre-trained model on a custom dataset is very easy. You can load the model very similarly to how you would normally load it to predict on new images (you only need to add one extra parameter!):

.. code-block:: python

   from spotiflow.model import Spotiflow
   from spotiflow.utils import get_data

   # Get the data
   train_imgs, train_spots, val_imgs, val_spots = get_data("/path/to/spots_data")

   # Initialize the model
   model = Spotiflow.from_pretrained(
       "general",
       inference_mode=False,
   )

   # Train and save the model
   model.fit(
       train_imgs,
       train_spots,
       val_imgs,
       val_spots,
       save_dir="/my/trained/model",
   )

Of course, you can also fine-tune from a model you have trained before. In that case, use the ``from_folder()`` method instead of ``from_pretrained()`` (see :ref:`index:Predicting spots in an image`).
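For instance, a minimal sketch of that variant is shown below (it assumes ``from_folder`` accepts the same ``inference_mode`` argument as ``from_pretrained``; check the API reference of your installed version):

.. code-block:: python

   from spotiflow.model import Spotiflow

   # Load a model trained previously (inference_mode=False is assumed to be
   # accepted by from_folder, mirroring from_pretrained)
   model = Spotiflow.from_folder(
       "/my/trained/model",
       inference_mode=False,
   )
   # Then call model.fit(...) exactly as above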
All the information about training customization from :ref:`train:Customizing the training` applies here as well. However, note that you cannot change the model architecture when fine-tuning!
101 changes: 101 additions & 0 deletions _sources/index.rst.txt
@@ -0,0 +1,101 @@
:hero: Spotiflow: accurate and robust spot detection for fluorescence microscopy

=========
Spotiflow
=========

Spotiflow is a learning-based, subpixel-accurate spot detection method for 2D and 3D fluorescence microscopy. It is primarily developed for spatial transcriptomics workflows that require transcript detection in large, multiplexed FISH images, although it can also be used to detect spot-like structures in general fluorescence microscopy images and volumes. For more information, please refer to our `paper <https://doi.org/10.1101/2024.02.01.578426/>`__.

Getting Started
---------------

Installation
~~~~~~~~~~~~


First, create and activate a fresh ``conda`` environment (we currently support Python 3.9 to 3.12). If you don't have ``conda`` installed, we recommend using `miniforge <https://github.com/conda-forge/miniforge>`__.

.. code-block:: console

   $ conda create -n spotiflow python=3.12
   $ conda activate spotiflow

**Note (for macOS users):** there is a known bug that can cause the installation of PyTorch with ``conda`` to break OpenMP. To avoid it, do not install PyTorch with ``conda``; let it be installed automatically via ``pip`` instead.

Then, install PyTorch using ``conda``/``mamba`` by following the `official instructions for your system <https://pytorch.org/get-started/locally>`__.

As an example, for a Linux system with CUDA (note that you should change the CUDA version to match the one installed on your system):

.. code-block:: console

   $ conda install pytorch torchvision pytorch-cuda=12.1 -c pytorch -c nvidia

**Note (for Windows users):** please install the latest `Build Tools for Visual Studio <https://visualstudio.microsoft.com/downloads/#build-tools-for-visual-studio-2022>`__ (make sure to select the C++ build tools during installation) before proceeding to install Spotiflow.

Finally, install ``spotiflow`` using ``pip``:

.. code-block:: console

   $ pip install spotiflow

Predicting spots in an image
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Python API
^^^^^^^^^^

The snippet below shows how to retrieve the spots from an image using one of the pretrained models:

.. code-block:: python

   from skimage.io import imread
   from spotiflow.model import Spotiflow
   from spotiflow.utils import write_coords_csv

   # Load the desired image
   img = imread("/path/to/your/image")

   # Load a pretrained model
   model = Spotiflow.from_pretrained("general")

   # Predict spots
   spots, details = model.predict(img)  # predict expects a numpy array
   # spots is a numpy array with shape (n_spots, 2)
   # details contains additional information about the prediction, like the
   # predicted heatmap, the probability per spot, the flow field, etc.

   # Save the results to a CSV file
   write_coords_csv(spots, "/path/to/save/spots.csv")
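If you want a closer look at those extra outputs, you can inspect ``details`` directly. The sketch below is only illustrative: the attribute names (``prob``, ``heatmap``) are assumptions, so print ``details`` to see exactly what your installed version returns.

.. code-block:: python

   # Hypothetical attribute names; print(details) to see what is actually available
   print(spots.shape)            # (n_spots, 2) subpixel coordinates
   print(details.prob.shape)     # assumed: per-spot probabilities
   print(details.heatmap.shape)  # assumed: predicted heatmap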
If a custom model is used, simply change the model loading step to:

.. code-block:: python

   # Load a custom model
   model = Spotiflow.from_folder("/path/to/model")
Command Line Interface (CLI)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can use the CLI to run inference on a single image or on a folder containing several images. To do that, you can use the following command:

.. code-block:: console

   $ spotiflow predict --input PATH

where ``PATH`` can be either an image or a folder. By default, the command will use the ``general`` pretrained model. You can specify a different model with the ``--pretrained-model`` flag. Moreover, spots are saved to a subfolder ``spotiflow_results`` created inside the input folder (this can be changed with the ``--out-dir`` flag). For more information, please refer to the help message of the CLI (``spotiflow predict -h``).

Napari plugin
^^^^^^^^^^^^^
Spotiflow can also be run easily through a graphical user interface as a `napari <https://napari.org/>`__ plugin. See :ref:`napari:Predicting spots using the napari plugin` for more information.

Contents
--------

.. toctree::
   :maxdepth: 2

   napari
   cli
   train
   finetune
   api
26 changes: 26 additions & 0 deletions _sources/napari.rst.txt
@@ -0,0 +1,26 @@
Predicting spots using the napari plugin
----------------------------------------

The napari plugin can be used to predict spots in a napari viewer. First, you must install it in the environment containing Spotiflow:

.. code-block:: console

   (spotiflow) $ pip install napari-spotiflow

The plugin will then be available in the napari GUI under the name "Spotiflow widget". This is what the GUI looks like:

.. image:: ./_static/spotiflow_napari_gui.png
   :width: 700
   :align: center

The plugin can run in two modes: images (``2D``) and volumes (``3D``), which can be toggled using the corresponding buttons in the GUI. You can also run it on movies by setting the appropriate axis order (it should start with a `T`).

Upon pressing the ``Run`` button, the plugin will create a ``Points`` layer containing the predicted spots:

.. image:: ./_static/spotiflow_napari_preds.png
   :width: 700
   :align: center

If the option ``Show CNN output`` is checked, the plugin will also create two ``Image`` layers containing the heatmap output of the CNN as well as the stereographic flow.

Finally, the plugin includes two sample 2D images (HybISS and Terra) as well as a synthetic 3D volume. These samples can be loaded from the ``File`` menu (``File -> Open sample -> napari-spotiflow``). You can try the plugin with these samples to get a better idea of how it works!
132 changes: 132 additions & 0 deletions _sources/train.rst.txt
@@ -0,0 +1,132 @@
Training a Spotiflow model on a custom dataset
----------------------------------------------

Data format
^^^^^^^^^^^

First of all, make sure that the data is organized in the following format:

::

   spots_data
   ├── train
   │   ├── img_001.csv
   │   ├── img_001.tif
   │   ...
   │   ├── img_XYZ.csv
   │   └── img_XYZ.tif
   └── val
       ├── val_img_001.csv
       ├── val_img_001.tif
       ...
       ├── val_img_XYZ.csv
       └── val_img_XYZ.tif

The actual naming of the files is not important, but the ``.csv`` and ``.tif`` files corresponding to the same image **must** have the same name! The ``.csv`` files must contain the spot coordinates in the following format:

.. code-block::

   y,x
   42.3,24.24
   252.99,307.97
   ...

The column names can also be `axis-0` (instead of `y`) and `axis-1` (instead of `x`). For the 3D case, the format is similar but with an additional column corresponding to the `z` coordinate:

.. code-block::

   z,y,x
   12.4,42.3,24.24
   61.2,252.99,307.97
   ...

In this case, you can also use `axis-0`, `axis-1`, and `axis-2` instead of `z`, `y`, and `x`, respectively.
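If your annotations are already in NumPy arrays, a minimal sketch of writing them out in this CSV format uses ``write_coords_csv`` from ``spotiflow.utils`` (the output path below is just an example):

.. code-block:: python

   import numpy as np
   from spotiflow.utils import write_coords_csv

   # (y, x) coordinates for one training image (example values)
   spots = np.array([
       [42.3, 24.24],
       [252.99, 307.97],
   ])
   write_coords_csv(spots, "spots_data/train/img_001.csv")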


Basic training
^^^^^^^^^^^^^^

You can easily train a model using the default settings as follows and save it to the directory `/my/trained/model`:

.. code-block:: python

   from spotiflow.model import Spotiflow
   from spotiflow.utils import get_data

   # Get the data
   train_imgs, train_spots, val_imgs, val_spots = get_data("/path/to/spots_data")

   # Initialize the model
   model = Spotiflow()

   # Train and save the model
   model.fit(
       train_imgs,
       train_spots,
       val_imgs,
       val_spots,
       save_dir="/my/trained/model",
   )

You can then load it by simply calling:

.. code-block:: python

   model = Spotiflow.from_folder("/my/trained/model")

In the 3D case, you should initialize a :py:class:`spotiflow.model.config.SpotiflowModelConfig` object and pass it to the `Spotiflow` constructor with the appropriate parameters set (see other options for the configuration at the end of the section):

.. code-block:: python

   # Same imports as before
   from spotiflow.model import SpotiflowModelConfig

   # Create the model config
   model_config = SpotiflowModelConfig(
       is_3d=True,
       grid=2,  # subsampling factor for prediction
       # you can pass other arguments here
   )
   model = Spotiflow(model_config)
   # Train and save the model as before

Customizing the training
^^^^^^^^^^^^^^^^^^^^^^^^

You can also pass other parameters relevant for training to the `fit` method. For example, you can change the number of epochs, the batch size, the learning rate, etc. You can do that using the `train_config` parameter. For more information on the allowed arguments, see the documentation of the :py:meth:`spotiflow.model.spotiflow.Spotiflow.fit` method as well as :py:class:`spotiflow.model.config.SpotiflowTrainingConfig`. As an example, let's change the number of epochs and the learning rate:

.. code-block:: python

   train_config = {
       "num_epochs": 100,
       "learning_rate": 0.001,
       "smart_crop": True,
       # other parameters
   }

   model.fit(
       train_imgs,
       train_spots,
       val_imgs,
       val_spots,
       save_dir="/my/trained/model",
       train_config=train_config,
       # other parameters
   )

In order to change the model architecture (`e.g.` the number of input channels, the number of layers, the variance for the heatmap generation, etc.), you can create a :py:class:`spotiflow.model.config.SpotiflowModelConfig` object and populate it accordingly. Then you can pass it to the `Spotiflow` constructor (note that this is necessary for 3D). For example, if our image is RGB and we need the network to use 3 input channels, we can do the following:

.. code-block:: python

   from spotiflow.model import SpotiflowModelConfig

   # Create the model config
   model_config = SpotiflowModelConfig(
       in_channels=3,
       # you can pass other arguments here
   )
   model = Spotiflow(model_config)