Merge branch 'dev' into add-lora-scaling-factor

caroteu committed Oct 28, 2024
2 parents 7a27ffb + 8801da3 commit 9eb4d0c

Showing 29 changed files with 2,033 additions and 645 deletions.
5 changes: 1 addition & 4 deletions .github/workflows/build_installers.yaml
@@ -18,10 +18,7 @@ jobs:
cd deployment
mamba install -y -c conda-forge constructor
mamba install -y -c conda-forge ruamel.yaml
mamba env create --file=../environment_cpu.yaml -n __MICROSAM_BUILD_ENV__
conda activate __MICROSAM_BUILD_ENV__
# TODO get the current version here and use it for pinning or enable passing this from dispatch
mamba install -y -c conda-forge micro_sam
mamba create -y -c conda-forge -n __MICROSAM_BUILD_ENV__ micro_sam natsort
conda activate base
RUN_SCRIPT: |
6 changes: 5 additions & 1 deletion .github/workflows/test.yaml
@@ -20,7 +20,9 @@ jobs:
fail-fast: false
matrix:
os: [ubuntu-latest, windows-latest, macos-latest]
python-version: ["3.11", "3.12"]
# 3.12 currently not supported due to issues with nifty.
# python-version: ["3.11", "3.12"]
python-version: ["3.11"]

steps:
- name: Checkout
@@ -30,6 +32,8 @@
uses: mamba-org/setup-micromamba@v1
with:
environment-file: environment_cpu.yaml
create-args: >-
python=${{ matrix.python-version }}
# Setup Qt libraries for GUI testing on Linux
- uses: tlambert03/setup-qt-libs@v1
17 changes: 12 additions & 5 deletions doc/annotation_tools.md
@@ -19,11 +19,18 @@ The annotation tools can be started from the napari plugin menu:

You can find additional information on the annotation tools [in the FAQ section](#usage-question).

HINT: If you would like to use `micro-sam` from the napari plugin menu, you must start napari from the environment where `micro-sam` has been installed:

```bash
$ mamba activate <ENVIRONMENT_NAME>
$ napari
```


## Annotator 2D

The 2d annotator can be started by
- clicking `Annotator 2d` in the plugin menu.
- clicking `Annotator 2d` in the plugin menu after starting `napari`.
- running `$ micro_sam.annotator_2d` in the command line.
- calling `micro_sam.sam_annotator.annotator_2d` in a python script. Check out [examples/annotator_2d.py](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/annotator_2d.py) for details.
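
For the scripting route, a minimal sketch looks roughly like this (the file name and `model_type` value are placeholders; see the linked example script for the authoritative version):

```python
import imageio.v3 as imageio
from micro_sam.sam_annotator import annotator_2d

# Load the 2d image you want to annotate (placeholder file name).
image = imageio.imread("my_image.tif")

# Start the 2d annotator; this opens napari with the micro-sam widgets.
annotator_2d(image, model_type="vit_b_lm")
```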

@@ -55,7 +62,7 @@ Check out [the video tutorial](https://youtu.be/9xjJBg_Bfuc) for an in-depth exp
## Annotator 3D

The 3d annotator can be started by
- clicking `Annotator 3d` in the plugin menu.
- clicking `Annotator 3d` in the plugin menu after starting `napari`.
- running `$ micro_sam.annotator_3d` in the command line.
- calling `micro_sam.sam_annotator.annotator_3d` in a python script. Check out [examples/annotator_3d.py](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/annotator_3d.py) for details.

@@ -81,7 +88,7 @@ Check out [the video tutorial](https://youtu.be/nqpyNQSyu74) for an in-depth exp
## Annotator Tracking

The tracking annotator can be started by
- clicking `Annotator Tracking` in the plugin menu.
- clicking `Annotator Tracking` in the plugin menu after starting `napari`.
- running `$ micro_sam.annotator_tracking` in the command line.
- calling `micro_sam.sam_annotator.annotator_tracking` in a python script. Check out [examples/annotator_tracking.py](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/annotator_tracking.py) for details.

@@ -107,7 +114,7 @@ Check out [the video tutorial](https://youtu.be/1gg8OPHqOyc) for an in-depth exp
## Image Series Annotator

The image series annotation tool enables running the [2d annotator](#annotator-2d) or [3d annotator](#annotator-3d) for multiple images that are saved in a folder. This makes it convenient to annotate many images without having to restart the tool for every image. It can be started by
- clicking `Image Series Annotator` in the plugin menu.
- clicking `Image Series Annotator` in the plugin menu after starting `napari`.
- running `$ micro_sam.image_series_annotator` in the command line.
- calling `micro_sam.sam_annotator.image_series_annotator` in a python script. Check out [examples/image_series_annotator.py](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/image_series_annotator.py) for details.

@@ -126,7 +133,7 @@ Check out [the video tutorial](https://youtu.be/HqRoImdTX3c) for an in-depth exp

## Finetuning UI

We also provide a graphical interface for fine-tuning models on your own data. It can be started by clicking `Finetuning` in the plugin menu.
We also provide a graphical interface for fine-tuning models on your own data. It can be started by clicking `Finetuning` in the plugin menu after starting `napari`.

**Note:** if you know a bit of Python programming, we recommend using a script for model finetuning instead. This will give you more options to configure the training. See [these instructions](#training-your-own-model) for details.

4 changes: 2 additions & 2 deletions doc/faq.md
@@ -146,13 +146,13 @@ NOTE: It is important to choose the correct model type when you opt for the abov

### 1. I have a microscopy dataset I would like to fine-tune Segment Anything for. Is it possible using `micro_sam`?
Yes, you can fine-tune Segment Anything on your own dataset. Here's how you can do it:
- Check out the [tutorial notebook](https://github.com/computational-cell-analytics/micro-sam/blob/master/notebooks/micro-sam-finetuning.ipynb) on how to fine-tune Segment Anything with our `micro_sam.training` library.
- Check out the [tutorial notebook](https://github.com/computational-cell-analytics/micro-sam/blob/master/notebooks/sam_finetuning.ipynb) on how to fine-tune Segment Anything with our `micro_sam.training` library.
- Or check the [examples](https://github.com/computational-cell-analytics/micro-sam/tree/master/examples/finetuning) for additional scripts that demonstrate finetuning.
- If you are not familiar with coding in python at all then you can also use the [graphical interface for finetuning](finetuning-ui). But we recommend using a script for more flexibility and reproducibility.
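
For the script-based route, the overall shape of a finetuning script is roughly as follows; the helper names and parameters below are assumptions based on the `micro_sam.training` API and may differ from the notebook, so treat this as a sketch rather than a reference:

```python
import micro_sam.training as sam_training

# Data loaders for image/label pairs; paths, keys and patch shape are placeholders.
train_loader = sam_training.default_sam_loader(
    raw_paths="train_images.tif", raw_key=None,
    label_paths="train_labels.tif", label_key=None,
    patch_shape=(512, 512), batch_size=1,
    with_segmentation_decoder=True,  # also train the decoder for automatic instance segmentation
)
val_loader = sam_training.default_sam_loader(
    raw_paths="val_images.tif", raw_key=None,
    label_paths="val_labels.tif", label_key=None,
    patch_shape=(512, 512), batch_size=1,
    with_segmentation_decoder=True,
)

# Finetune one of the SAM models on your data.
sam_training.train_sam(
    name="my_finetuned_model",
    model_type="vit_b",
    train_loader=train_loader,
    val_loader=val_loader,
    n_epochs=10,
)
```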


### 2. I would like to fine-tune Segment Anything on open-source cloud services (e.g. Kaggle Notebooks), is it possible?
Yes, you can fine-tune Segment Anything on your custom datasets on Kaggle (and [BAND](https://computational-cell-analytics.github.io/micro-sam/micro_sam.html#using-micro_sam-on-band)). Check out our [tutorial notebook](https://github.com/computational-cell-analytics/micro-sam/blob/master/notebooks/micro-sam-finetuning.ipynb) for this.
Yes, you can fine-tune Segment Anything on your custom datasets on Kaggle (and [BAND](https://computational-cell-analytics.github.io/micro-sam/micro_sam.html#using-micro_sam-on-band)). Check out our [tutorial notebook](https://github.com/computational-cell-analytics/micro-sam/blob/master/notebooks/sam_finetuning.ipynb) for this.


<!---
14 changes: 12 additions & 2 deletions doc/installation.md
@@ -22,13 +22,23 @@ You can follow the instructions [here](https://mamba.readthedocs.io/en/latest/in
$ mamba install -c pytorch -c conda-forge micro_sam
```
or you can create a new environment (here called `micro-sam`) via:

```bash
$ mamba create -c pytorch -c conda-forge -n micro-sam micro_sam
```

If you want to use the GPU, you need to install PyTorch from the `pytorch` channel instead of `conda-forge`. For example:

```bash
$ mamba create -c pytorch -c nvidia -c conda-forge -n micro-sam micro_sam pytorch pytorch-cuda=12.1
```

NOTE: If you create a new environment (e.g. here called `micro-sam`), you must activate the environment using

```bash
$ mamba activate micro-sam
```

You may need to change this command to install the correct CUDA version for your system, see [https://pytorch.org/](https://pytorch.org/) for details.
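
To verify that the GPU installation worked, a quick check from Python (run inside the activated environment) is:

```python
import torch

print(torch.__version__)           # installed PyTorch version
print(torch.version.cuda)          # CUDA version PyTorch was built against (None for CPU-only builds)
print(torch.cuda.is_available())   # True if PyTorch can access the GPU
```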


@@ -73,8 +83,8 @@ $ pip install -e .
## From installer

We also provide installers for Linux and Windows:
- [Linux](https://owncloud.gwdg.de/index.php/s/nvLwlrHE4DkYcWl)
- [Windows](https://owncloud.gwdg.de/index.php/s/feIs9069IrURmbt)
- [Linux](https://owncloud.gwdg.de/index.php/s/Fyf57WZuiX1NyXs)
- [Windows](https://owncloud.gwdg.de/index.php/s/ZWrY68hl7xE3kGP)
<!---
- [Mac](https://owncloud.gwdg.de/index.php/s/7YupGgACw9SHy2P)
-->
9 changes: 7 additions & 2 deletions doc/start_page.md
@@ -22,12 +22,17 @@ If you run into any problems or have questions please [open an issue](https://gi
## Quickstart

You can install `micro_sam` via mamba:
```
```bash
$ mamba install -c conda-forge micro_sam
```
We also provide installers for Windows and Linux. For more details on the available installation options, check out [the installation section](#installation).

After installing `micro_sam` you can start napari and select the annotation tool you want to use from `Plugins -> SegmentAnything for Microscopy`. Check out the [quickstart tutorial video](https://youtu.be/gcv0fa84mCc) for a short introduction and [the annotation tool section](#annotation-tools) for details.
After installing `micro_sam`, you can start napari from within your environment using

```bash
$ napari
```
After starting napari, you can select the annotation tool you want to use from `Plugins -> SegmentAnything for Microscopy`. Check out the [quickstart tutorial video](https://youtu.be/gcv0fa84mCc) for a short introduction and [the annotation tool section](#annotation-tools) for details.

The `micro_sam` python library can be imported via

7 changes: 5 additions & 2 deletions environment_cpu.yaml
@@ -4,16 +4,19 @@ channels:
- conda-forge
dependencies:
- cpuonly
# This pin is necessary because later nifty versions have import errors on windows.
- nifty =1.2.1=*_4
- imagecodecs
- magicgui
- napari
- napari >=0.5.0
- natsort
- pip
- pooch
- protobuf <5
- pyqt
- python-xxhash
- python-elf >=0.4.8
- pytorch
- pytorch >=2.4
- segment-anything
- torchvision
- torch_em >=0.7.0
7 changes: 5 additions & 2 deletions environment_gpu.yaml
@@ -5,15 +5,18 @@ channels:
- conda-forge
dependencies:
- imagecodecs
# This pin is necessary because later nifty versions have import errors on windows.
- nifty =1.2.1=*_4
- magicgui
- napari
- napari >=0.5.0
- natsort
- pip
- pooch
- protobuf <5
- pyqt
- python-xxhash
- python-elf >=0.4.8
- pytorch
- pytorch >=2.4
- pytorch-cuda>=11.7 # you may need to update the cuda version to match your system
- segment-anything
- torchvision
2 changes: 1 addition & 1 deletion micro_sam/__version__.py
@@ -1 +1 @@
__version__ = "1.0.1"
__version__ = "1.1.1"
63 changes: 37 additions & 26 deletions micro_sam/automatic_segmentation.py
@@ -16,9 +16,9 @@ def get_predictor_and_segmenter(
model_type: str,
checkpoint: Optional[Union[os.PathLike, str]] = None,
device: str = None,
amg: bool = False,
amg: Optional[bool] = None,
is_tiled: bool = False,
amg_kwargs: Dict = {}
**kwargs,
) -> Tuple[util.SamPredictor, Union[AMGBase, InstanceSegmentationWithDecoder]]:
"""Get the Segment Anything model and class for automatic instance segmentation.
@@ -27,7 +27,10 @@ def get_predictor_and_segmenter(
checkpoint: The filepath to the stored model checkpoints.
device: The torch device.
amg: Whether to perform automatic segmentation in AMG mode.
Otherwise AIS will be used, which requires a special segmentation decoder.
If not specified, AIS will be used if it is available and AMG otherwise.
is_tiled: Whether to return a segmenter that runs segmentation in a tiled (sliding-window) fashion.
kwargs: Keyword arguments for the automatic instance segmentation class.
Returns:
The Segment Anything model.
@@ -41,20 +44,22 @@
model_type=model_type, device=device, checkpoint_path=checkpoint, return_state=True,
)

# Get the segmenter for automatic segmentation.
assert isinstance(amg_kwargs, Dict), "Please ensure 'amg_kwargs' gets arguments in a dictionary."
if amg is None:
amg = "decoder_state" not in state
if amg:
decoder = None
else:
if "decoder_state" not in state:
raise RuntimeError("You have passed amg=False, but your model does not contain a segmentation decoder.")
decoder_state = state["decoder_state"]
decoder = get_decoder(image_encoder=predictor.model.image_encoder, decoder_state=decoder_state, device=device)

segmenter = get_amg(
predictor=predictor,
is_tiled=is_tiled,
decoder=get_decoder(
image_encoder=predictor.model.image_encoder,
decoder_state=state["decoder_state"],
device=device
) if "decoder_state" in state and not amg else None,
**amg_kwargs
decoder=decoder,
**kwargs
)

return predictor, segmenter
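
A short usage sketch of the new behaviour (the model types are placeholders; with `amg` left unset, the function now picks AIS when a decoder state is available and falls back to AMG otherwise):

```python
from micro_sam.automatic_segmentation import get_predictor_and_segmenter

# Let the function decide: AIS if the checkpoint ships a segmentation decoder, AMG otherwise.
predictor, segmenter = get_predictor_and_segmenter(model_type="vit_b_lm")

# Force AMG explicitly, e.g. for a vanilla SAM checkpoint without a decoder.
predictor_amg, segmenter_amg = get_predictor_and_segmenter(model_type="vit_b", amg=True)
```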


@@ -82,7 +87,8 @@ def automatic_instance_segmentation(
embedding_path: The path where the embeddings are cached already / will be saved.
key: The key to the input file. This is needed for container files (e.g. hdf5 or zarr)
or to load several images as a 3d volume. Provide a glob pattern, e.g. "*.tif", for this case.
ndim: The dimensionality of the data.
ndim: The dimensionality of the data. By default the dimensionality of the data will be used.
If you have RGB data you have to specify this explicitly, e.g. pass ndim=2 for 2d segmentation of RGB.
tile_shape: Shape of the tiles for tiled prediction. By default prediction is run without tiling.
halo: Overlap of the tiles for tiled prediction.
verbose: Verbosity flag.
@@ -97,21 +103,12 @@
else:
image_data = util.load_image_data(input_path, key)

if ndim == 3 or image_data.ndim == 3:
if image_data.ndim != 3:
raise ValueError(f"The inputs do not correspond to three dimensional inputs: '{image_data.ndim}'")
ndim = image_data.ndim if ndim is None else ndim

if ndim == 2:
if (image_data.ndim != 2) and (image_data.ndim != 3 and image_data.shape[-1] != 3):
raise ValueError(f"The inputs does not match the shape expectation of 2d inputs: {image_data.shape}")

instances = automatic_3d_segmentation(
volume=image_data,
predictor=predictor,
segmentor=segmenter,
embedding_path=embedding_path,
tile_shape=tile_shape,
halo=halo,
verbose=verbose,
**generate_kwargs
)
else:
# Precompute the image embeddings.
image_embeddings = util.precompute_image_embeddings(
predictor=predictor,
@@ -137,6 +134,20 @@
instances = np.zeros(this_shape, dtype="uint32")
else:
instances = mask_data_to_segmentation(masks, with_background=True, min_object_size=0)
else:
if (image_data.ndim != 3) and (image_data.ndim != 4 and image_data.shape[-1] != 3):
raise ValueError(f"The inputs does not match the shape expectation of 3d inputs: {image_data.shape}")

instances = automatic_3d_segmentation(
volume=image_data,
predictor=predictor,
segmentor=segmenter,
embedding_path=embedding_path,
tile_shape=tile_shape,
halo=halo,
verbose=verbose,
**generate_kwargs
)

if output_path is not None:
# Save the instance segmentation
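
A sketch of how the updated `ndim` handling is used in practice (paths and model type are placeholders; by default the dimensionality is derived from the data, while RGB images need an explicit `ndim=2`):

```python
from micro_sam.automatic_segmentation import (
    automatic_instance_segmentation, get_predictor_and_segmenter
)

predictor, segmenter = get_predictor_and_segmenter(model_type="vit_b_lm")

# 2d segmentation of an RGB image: ndim has to be passed explicitly,
# because the image itself has three axes (height, width, channels).
segmentation = automatic_instance_segmentation(
    predictor=predictor,
    segmenter=segmenter,
    input_path="rgb_image.tif",
    output_path="segmentation.tif",
    ndim=2,
)
```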
30 changes: 22 additions & 8 deletions micro_sam/evaluation/inference.py
@@ -550,9 +550,13 @@ def run_amg(
iou_thresh_values: Optional[List[float]] = None,
stability_score_values: Optional[List[float]] = None,
peft_kwargs: Optional[Dict] = None,
cache_embeddings: bool = False
) -> str:
embedding_folder = os.path.join(experiment_folder, "embeddings") # where the precomputed embeddings are saved
os.makedirs(embedding_folder, exist_ok=True)
if cache_embeddings:
embedding_folder = os.path.join(experiment_folder, "embeddings") # where the precomputed embeddings are saved
os.makedirs(embedding_folder, exist_ok=True)
else:
embedding_folder = None

predictor = util.get_sam_model(model_type=model_type, checkpoint_path=checkpoint, peft_kwargs=peft_kwargs)
amg = AutomaticMaskGenerator(predictor)
@@ -572,9 +576,15 @@
)

instance_segmentation.run_instance_segmentation_grid_search_and_inference(
amg, grid_search_values,
val_image_paths, val_gt_paths, test_image_paths,
embedding_folder, prediction_folder, gs_result_folder,
segmenter=amg,
grid_search_values=grid_search_values,
val_image_paths=val_image_paths,
val_gt_paths=val_gt_paths,
test_image_paths=test_image_paths,
embedding_dir=embedding_folder,
prediction_dir=prediction_folder,
result_dir=gs_result_folder,
experiment_folder=experiment_folder,
)
return prediction_folder

@@ -592,9 +602,13 @@ def run_instance_segmentation_with_decoder(
val_gt_paths: List[Union[str, os.PathLike]],
test_image_paths: List[Union[str, os.PathLike]],
peft_kwargs: Optional[Dict] = None,
cache_embeddings: bool = False,
) -> str:
embedding_folder = os.path.join(experiment_folder, "embeddings") # where the precomputed embeddings are saved
os.makedirs(embedding_folder, exist_ok=True)
if cache_embeddings:
embedding_folder = os.path.join(experiment_folder, "embeddings") # where the precomputed embeddings are saved
os.makedirs(embedding_folder, exist_ok=True)
else:
embedding_folder = None

predictor, decoder = get_predictor_and_decoder(
model_type=model_type, checkpoint_path=checkpoint, peft_kwargs=peft_kwargs,
@@ -616,6 +630,6 @@
segmenter, grid_search_values,
val_image_paths, val_gt_paths, test_image_paths,
embedding_dir=embedding_folder, prediction_dir=prediction_folder,
result_dir=gs_result_folder,
result_dir=gs_result_folder, experiment_folder=experiment_folder,
)
return prediction_folder
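
A hedged sketch of how the new `cache_embeddings` option would be used (parts of the `run_amg` signature are collapsed in this diff, so the argument layout below is an assumption; keyword arguments are used throughout):

```python
from micro_sam.evaluation.inference import run_amg

# Placeholder paths for the validation and test splits.
val_images = ["val/im0.tif", "val/im1.tif"]
val_labels = ["val/gt0.tif", "val/gt1.tif"]
test_images = ["test/im0.tif", "test/im1.tif"]

# With cache_embeddings=False (the new default), no embeddings folder is created
# and embeddings are recomputed on the fly during grid search and inference.
prediction_folder = run_amg(
    checkpoint="finetuned_model.pt",
    model_type="vit_b",
    experiment_folder="./amg_experiment",
    val_image_paths=val_images,
    val_gt_paths=val_labels,
    test_image_paths=test_images,
    cache_embeddings=False,
)
```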