diff --git a/micro_sam.html b/micro_sam.html index 61b36aa8..06268058 100644 --- a/micro_sam.html +++ b/micro_sam.html @@ -42,8 +42,6 @@
You can find more information on the installation and how to troubleshoot it in the FAQ section.
+mamba is a drop-in replacement for conda, but much faster. @@ -250,8 +256,8 @@
We also provide installers for Linux and Windows:
The annotation tools can be started from the napari plugin menu:
+You can find additional information on the annotation tools in the FAQ section.
+The 2d annotator can be started by
@@ -459,36 +467,6 @@The Configuration
option allows you to choose the hardware configuration for training. We try to automatically select the correct setting for your system, but it can also be changed. Please refer to the tooltips for the other parameters.
tile_shape
, which determines the size of the inner tile and halo
, which determines the size of the additional overlap.
-micro_sam
GUI you can specify the values for the halo
and tile_shape
via the Tile X
, Tile Y
, Halo X
and Halo Y
by clicking on Embeddings Settings
.tile_shape=(1024, 1024), halo=(128, 128)
. See also the wholeslide_annotator example.--tile_shape 1024 1024 --halo 128 128
embedding_path
(-e
in the CLI).halo
such that it is larger than half of the maximal radius of the objects your segmenting.micro_sam.precompute_embeddings
for this (it is installed with the rest of the applications). You can specify the location of the precomputed embeddings via the embedding_path
argument.
-embeddings save path
. Existing embeddings are loaded from the specified path or embeddings are computed and the path is used to save them. model_type
argument and either set it to vit_b
or to vit_l
(default is vit_h
). However, this may lead to worse results.committed_objects
/ committed_tracks
layer to correct segmentations you obtained from another tool (e.g. CellPose) or to save intermediate annotation results. The results can be saved via File -> Save Selected Layer(s) ...
in the napari menu (see the tutorial videos for details). They can be loaded again by specifying the corresponding location via the segmentation_result
(2d and 3d segmentation) or tracking_result
(tracking) argument.The python library can be imported via
@@ -534,7 +512,7 @@We also support training an additional decoder for automatic instance segmentation. This yields better results than the automatic mask generation of segment anything and is significantly faster. The notebook explains how to activate training it together with the rest of SAM and how to then use it.
-More advanced examples, including quantitative and qualitative evaluation, of finetuned models can be found in finetuning, which contains the code for training and evaluating our models.
+More advanced examples, including quantitative and qualitative evaluation, of finetuned models can be found in finetuning, which contains the code for training and evaluating our models. You can find further information on model training in the FAQ section.
vit_b
: Default Segment Anything model with vit-b backbone.vit_t
: Segment Anything model with vit-tiny backbone. From the Mobile SAM publication. vit_l_lm
: Finetuned Segment Anything model for cells and nuclei in light microscopy data with vit-l backbone. (zenodo, bioimage.io)vit_b_lm
: Finetuned Segment Anything model for cells and nuclei in light microscopy data with vit-b backbone. (zenodo, bioimage.io)vit_b_lm
: Finetuned Segment Anything model for cells and nuclei in light microscopy data with vit-b backbone. (zenodo, diplomatic-bug on bioimage.io)vit_t_lm
: Finetuned Segment Anything model for cells and nuclei in light microscopy data with vit-t backbone. (zenodo, bioimage.io)vit_l_em_organelles
: Finetuned Segment Anything model for mitochondria and nuclei in electron microscopy data with vit-l backbone. (zenodo, bioimage.io)vit_b_em_organelles
: Finetuned Segment Anything model for mitochondria and nuclei in electron microscopy data with vit-b backbone. (zenodo, bioimage.io)We do not recommend using these models since our new models improve upon them significantly. But we provide the links here in case they are needed to reproduce older segmentation workflows.
+Here we provide frequently asked questions and common issues.
+If you encounter a problem or question not addressed here feel free to open an issue or to ask your question on image.sc with the tag micro-sam
.
micro_sam
?The installation for micro_sam
is supported in three ways: from mamba (recommended), from source and from installers. Check out our tutorial video to get started with micro_sam
, briefly walking you through the installation process and how to start the tool.
micro_sam
using the installer, I am getting some errors.The installer should work out-of-the-box on Windows and Linux platforms. Please open an issue to report the error you encounter.
+ +++ +NOTE: The installers enable using
+micro_sam
without mamba or conda. However, we recommend the installation from mamba / from source to use all its features seamlessly. Specifically, the installers currently only support the CPU and won't enable you to use the GPU (if you have one).
micro_sam
?From our experience, the micro_sam
annotation tools work seamlessly on most laptop or workstation CPUs and with > 8GB RAM.
+You might encounter some slowness with ≤ 8GB RAM. The resources micro_sam
's annotation tools have been tested on are:
Having a GPU will significantly speed up the annotation tools and especially the model finetuning.
+ +micro_sam
has been tested mostly with CUDA 12.1 and PyTorch [2.1.1, 2.2.0]. However, the tool and the library are not constrained to a specific PyTorch or CUDA version, so it should work fine with the standard PyTorch installation for your system.
ModuleNotFoundError: No module named 'elf.io
). What should I do?With the latest release 1.0.0, the installation from mamba and source should take care of this and install all the relevant packages for you.
+So please reinstall micro_sam
.
micro_sam
using pip?The installation is not supported via pip.
+ +ImportError: cannot import name 'UNETR' from 'torch_em.model'
.It's possible that you have an older version of torch-em
installed. Similar errors are often raised by other libraries; the usual reasons are: a) outdated packages are installed, or b) a non-existent module is being called. If such an error originates from micro_sam
, then a)
is most likely the reason. We recommend installing the latest version following the installation instructions.
Yes, you can use the annotator tool for:
+ +micro_sam
models on your own microscopy data, in case the provided models do not meet your needs. One caveat: you need to annotate a few objects beforehand (micro_sam
has the potential of improving interactive segmentation with only a few annotated objects) to proceed with the supervised finetuning procedure.We currently provide three different kinds of models: the default models vit_h
, vit_l
, vit_b
and vit_t
; the models for light microscopy vit_l_lm
, vit_b_lm
and vit_t_lm
; the models for electron microscopy vit_l_em_organelles
, vit_b_em_organelles
and vit_t_em_organelles
.
+You should first try the model that best fits the segmentation task you're interested in: an lm
model for cell or nucleus segmentation in light microscopy or an em_organelles
model for segmenting nuclei, mitochondria or other roundish organelles in electron microscopy.
+If your segmentation problem does not meet these descriptions, or if these models don't work well, you should try one of the default models instead.
+The letter after vit
denotes the size of the image encoder in SAM, h
(huge) being the largest and t
(tiny) the smallest. The smaller models are faster but may yield worse results. We recommend using either a vit_l
or vit_b
model; they offer the best trade-off between speed and segmentation quality.
+You can find more information on model choice here.
The Segment Anything model expects inputs of shape 1024 x 1024 pixels. Inputs that do not match this size will be internally resized to match it. Hence, applying Segment Anything to a much larger image will often lead to inferior results, or sometimes not work at all. To address this, micro_sam
implements tiling: cutting up the input image into tiles of a fixed size (with a fixed overlap) and running Segment Anything for the individual tiles. You can activate tiling with the tile_shape
parameter, which determines the size of the inner tile and halo
, which determines the size of the additional overlap.
micro_sam
annotation tools, you can specify the values for the tile_shape
and halo
via the tile_x
, tile_y
, halo_x
and halo_y
parameters in the Embedding Settings
drop-down menu.micro_sam
library in a python script, you can pass them as tuples, e.g. tile_shape=(1024, 1024), halo=(256, 256)
. See also the wholeslide annotator example.--tile_shape 1024 1024 --halo 256 256
.++ +NOTE: It's recommended to choose the
+halo
so that it is larger than half of the maximal radius of the objects you want to segment.
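For example, when using the python library, tiling could be activated like this (a minimal sketch; the file name is a placeholder and it assumes that your micro_sam version exposes the tile_shape / halo parameters on the annotator function):

import imageio.v3 as imageio
from micro_sam.sam_annotator import annotator_2d

# Hypothetical large 2d image; tiling splits it into 1024 x 1024 tiles with a 256 pixel halo.
image = imageio.imread("example_image.tif")
annotator_2d(image, model_type="vit_b_lm", tile_shape=(1024, 1024), halo=(256, 256))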
micro_sam
pre-computes the image embeddings produced by the vision transformer backbone in Segment Anything, and (optionally) stores them on disc. If you are using a CPU, this step can take a while for 3d data or time-series (you will see a progress bar in the command-line interface / on the bottom right of napari). If you have access to a GPU without a graphical interface (e.g. via a local compute cluster or a cloud provider), you can also pre-compute the embeddings there and then copy them over to your laptop / local machine to speed this up.
micro_sam.precompute_embeddings
for this (it is installed with the rest of the software). You can specify the location of the precomputed embeddings via the embedding_path
argument.embeddings_save_path
option in the Embedding Settings
drop-down. You can later load the precomputed image embeddings by entering the path to the stored embeddings there as well.micro_sam
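In a python script, pre-computing and caching the embeddings could look like this (a sketch; volume.tif is a placeholder and the argument names of micro_sam.util.precompute_image_embeddings may differ slightly between versions):

import imageio.v3 as imageio
from micro_sam.util import get_sam_model, precompute_image_embeddings

volume = imageio.imread("volume.tif")  # hypothetical 3d input in ZYX order
predictor = get_sam_model(model_type="vit_b_lm")

# The embeddings are written to the zarr file at save_path and re-used on the next run.
embeddings = precompute_image_embeddings(predictor, volume, save_path="embeddings.zarr", ndim=3)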
on a CPU?While most other processing steps are very fast even on a CPU, the automatic segmentation step for the default Segment Anything models (typically called the "Segment Anything" feature or AMG - Automatic Mask Generation) takes several minutes without a GPU (depending on the image size). For large volumes and time-series, segmenting an object interactively in 3d / tracking across time can take a couple of seconds with a CPU (it is very fast with a GPU).
+ +++ +HINT: All the tutorial videos have been created on CPU resources.
+
micro_sam
?You can save and load the results from the committed_objects
layer to correct segmentations you obtained from another tool (e.g. CellPose) or save intermediate annotation results. The results can be saved via File
-> Save Selected Layer(s) ...
in the napari menu-bar on top (see the tutorial videos for details). They can be loaded again by specifying the corresponding location via the segmentation_result
parameter in the CLI or python script (2d and 3d segmentation).
+If you are using an annotation tool you can load the segmentation you want to edit as a segmentation layer and rename it to committed_objects
.
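A small sketch of how a previously saved segmentation could be passed back to the 2d annotator from python (the file names are placeholders and the segmentation_result argument is assumed to behave as described above):

import imageio.v3 as imageio
from micro_sam.sam_annotator import annotator_2d

image = imageio.imread("image.tif")                 # the raw data
segmentation = imageio.imread("segmentation.tif")   # e.g. exported from the committed_objects layer

# The segmentation is loaded into the committed_objects layer and can be corrected interactively.
annotator_2d(image, segmentation_result=segmentation, model_type="vit_b_lm")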
micro_sam
for segmenting objects. I would like to report the steps for reproducibility. How can this be done?The annotation steps and segmentation results can be saved to a zarr file by providing the commit_path
in the commit
widget. This file will contain all relevant information to reproduce the segmentation.
++ +NOTE: This feature is still under development and we have not implemented rerunning the segmentation from this file yet. See this issue for details.
+
micro_sam
generalist models do not work for my data. What should I do?micro_sam
supports interactive annotation using positive and negative point prompts, box prompts and polygon drawing. You can combine multiple types of prompts to improve the segmentation quality. In case the aforementioned suggestions do not work as desired, micro_sam
also supports finetuning a model on your data (see the next section). We recommend the following: a) Check which of the provided models performs relatively well on your data, b) Choose the best model as the starting point to train your own specialist model for the desired segmentation task.
While emitting signal ... an error occurred in callback ... This is not a bug in psygnal. See ... above for details.
These messages occur when an internal error happens in micro_sam
. In most cases this is due to inconsistent annotations and you can fix them by clearing the annotations.
+We want to remove these errors, so we would be very grateful if you can open an issue and describe the steps you did when encountering it.
The first thing to check is: a) make sure you are using the latest version of micro_sam
(pull the latest commit from master if your installation is from source, or update the installation from conda / mamba using mamba update micro_sam
), and b) try out the steps from the 3d annotator tutorial video to check whether you see the same behaviour (or the same errors) there. For 3d images, it's important to pass the inputs in the python axis convention, ZYX.
+c) try using a different model and change the projection mode for 3d segmentation. This is also explained in the video.
micro_sam
to annotate them?Segment Anything does not work well for very small or fine-grained objects (e.g. filaments). In these cases, you could try to use tiling to improve results (see Point 3 above for details).
+ +Editing (drawing / erasing) very large 2d images or 3d volumes is known to be slow at the moment, as the objects in the layers are stored in-memory. See the related issue.
+ +"napari" is not responding.
pops up.This can happen for long-running computations. You just need to wait a bit longer and the computation will finish.
+ +Yes, you can fine-tune Segment Anything on your own dataset. Here's how you can do it:
+ +micro_sam.training
library.Yes, you can fine-tune Segment Anything on your custom datasets on Kaggle (and BAND). Check out our tutorial notebook for this.
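As a rough sketch of what such a finetuning script can look like (the data paths are placeholders, and the exact keyword names of micro_sam.training.train_sam and torch_em.default_segmentation_loader should be checked against your installed versions):

import torch_em
from micro_sam import training as sam_training

# Placeholder data layout: folders of tif images and matching instance label masks.
patch_shape = (512, 512)
train_loader = torch_em.default_segmentation_loader(
    raw_paths="data/train/images", raw_key="*.tif",
    label_paths="data/train/labels", label_key="*.tif",
    patch_shape=patch_shape, batch_size=1,
)
val_loader = torch_em.default_segmentation_loader(
    raw_paths="data/val/images", raw_key="*.tif",
    label_paths="data/val/labels", label_key="*.tif",
    patch_shape=patch_shape, batch_size=1,
)

# Finetune the ViT-B Segment Anything model on the data.
sam_training.train_sam(
    name="sam_finetuned_for_my_data",   # name of the checkpoint that will be saved
    model_type="vit_b",
    train_loader=train_loader,
    val_loader=val_loader,
    n_epochs=50,
    n_objects_per_batch=25,             # reduce this value if you run out of GPU memory
)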
+ +TODO: explain instance segmentation labels, that you can get them by annotation with micro_sam, and dense vs. sparse annotation (for training without / with decoder)
+ +You can load your finetuned model by entering the path to its checkpoint in the custom_weights_path
field in the Embedding Settings
drop-down menu.
+If you are using the python library or CLI you can specify this path with the checkpoint_path
parameter.
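For example (a sketch; the checkpoint path is a placeholder and the argument is assumed to be called checkpoint_path as in the current API):

from micro_sam.util import get_sam_model

# Load a finetuned model from a local checkpoint instead of one of the shipped models.
predictor = get_sam_model(model_type="vit_b", checkpoint_path="finetuned_model/best.pt")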
micro_sam
?micro_sam
introduces a new segmentation decoder to the Segment Anything backbone to enable faster and more accurate automatic instance segmentation. It predicts the distances to the object center and boundary as well as foreground probabilities, and performs a seeded watershed-based postprocessing to obtain the instances.
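As a rough illustration of how this decoder-based automatic instance segmentation can be called from the python library (a sketch; the helper names get_predictor_and_decoder, InstanceSegmentationWithDecoder and mask_data_to_segmentation are assumptions based on the 1.0 API and may differ between versions):

import imageio.v3 as imageio
from micro_sam.instance_segmentation import (
    InstanceSegmentationWithDecoder, get_predictor_and_decoder, mask_data_to_segmentation
)
from micro_sam.util import precompute_image_embeddings

image = imageio.imread("example_image.tif")  # placeholder input image

# Load one of the finetuned models together with its extra instance segmentation decoder.
predictor, decoder = get_predictor_and_decoder(model_type="vit_b_lm")
ais = InstanceSegmentationWithDecoder(predictor, decoder)

# Predict center / boundary distances and foreground, then run the watershed post-processing.
embeddings = precompute_image_embeddings(predictor, image)
ais.initialize(image, image_embeddings=embeddings)
masks = ais.generate(center_distance_threshold=0.5, boundary_distance_threshold=0.5)
segmentation = mask_data_to_segmentation(masks, with_background=True)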
+
Finetuning Segment Anything is possible on most consumer-grade GPU and CPU setups (though training is a lot slower on the CPU). For the mentioned resource, it should be possible to finetune a ViT Base (also abbreviated as vit_b
) by reducing the number of objects per image to 15.
+This parameter has the biggest impact on the VRAM consumption and quality of the finetuned model.
+You can find an overview of the resources we have tested for finetuning here.
+We also provide the convenience function micro_sam.training.train_sam_for_configuration
that selects the best training settings for a given configuration. This function is also used by the finetuning UI.
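A sketch of how this convenience function could be called (the data paths are placeholders, "V100" is only an illustrative configuration name, and the keyword names should be checked against the micro_sam.training documentation for your installed version):

import torch_em
from micro_sam import training as sam_training

# Placeholder data layout: folders of tif images and instance label masks.
loader_kwargs = dict(
    raw_paths="data/train/images", raw_key="*.tif",
    label_paths="data/train/labels", label_key="*.tif",
    patch_shape=(512, 512), batch_size=1,
)
train_loader = torch_em.default_segmentation_loader(**loader_kwargs)
# In practice the validation loader should point to separate validation data.
val_loader = torch_em.default_segmentation_loader(**loader_kwargs)

sam_training.train_sam_for_configuration(
    name="sam_finetuned_for_my_data",
    configuration="V100",
    train_loader=train_loader,
    val_loader=val_loader,
)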
Thanks to torch-em
, a) Creating PyTorch datasets and dataloaders using the python library is convenient and supported for various data formats and data structures.
+See the tutorial notebook on how to create dataloaders using torch-em
and the documentation for details on creating your own datasets and dataloaders; and b) finetuning using the napari
tool eases the aforementioned process, by allowing you to add the input parameters (path to the directory for inputs and labels etc.) directly in the tool.
++ +NOTE: If you have images with large input shapes with a sparse density of instance segmentations, we recommend using
+sampler
for choosing the patches with valid segmentation for the finetuning purpose (see the example for PlantSeg (Root) specialist model inmicro_sam
).
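A short sketch of how such a sampler could be plugged into a torch_em dataloader (the paths are placeholders; MinInstanceSampler is assumed to be available in torch_em.data):

import torch_em
from torch_em.data import MinInstanceSampler

# Reject sampled patches that contain fewer than two annotated instances.
sampler = MinInstanceSampler(min_num_instances=2)
loader = torch_em.default_segmentation_loader(
    raw_paths="data/train/images", raw_key="*.tif",
    label_paths="data/train/labels", label_key="*.tif",
    patch_shape=(512, 512), batch_size=1,
    sampler=sampler,
)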
TODO: move the content of https://github.com/computational-cell-analytics/micro-sam/blob/master/doc/bioimageio/validation.md here.
+1__version__ = "1.0.0" +diff --git a/micro_sam/sam_annotator/_widgets.html b/micro_sam/sam_annotator/_widgets.html index c3c45f6d..fa34fc3f 100644 --- a/micro_sam/sam_annotator/_widgets.html +++ b/micro_sam/sam_annotator/_widgets.html @@ -1671,127 +1671,142 @@1__version__ = "1.0.0post0"
1507 settings = _make_collapsible(setting_values, title="Automatic Segmentation Settings") 1508 return settings 1509 -1510 def _run_segmentation_2d(self, kwargs, i=None): -1511 pbar, pbar_signals = _create_pbar_for_threadworker() -1512 -1513 @thread_worker -1514 def seg_impl(): -1515 def pbar_init(total, description): -1516 pbar_signals.pbar_total.emit(total) -1517 pbar_signals.pbar_description.emit(description) -1518 -1519 seg = _instance_segmentation_impl( -1520 self.with_background, self.min_object_size, i=i, -1521 pbar_init=pbar_init, -1522 pbar_update=lambda update: pbar_signals.pbar_update.emit(update), -1523 **kwargs -1524 ) -1525 pbar_signals.pbar_stop.emit() -1526 return seg -1527 -1528 def update_segmentation(seg): -1529 if i is None: -1530 self._viewer.layers["auto_segmentation"].data = seg -1531 else: -1532 self._viewer.layers["auto_segmentation"].data[i] = seg -1533 self._viewer.layers["auto_segmentation"].refresh() -1534 -1535 worker = seg_impl() -1536 worker.returned.connect(update_segmentation) -1537 worker.start() -1538 return worker -1539 -1540 # We refuse to run 3D segmentation with the AMG unless we have a GPU or all embeddings -1541 # are precomputed. Otherwise this would take too long. -1542 def _allow_segment_3d(self): -1543 if self.with_decoder: -1544 return True -1545 state = AnnotatorState() -1546 predictor = state.predictor -1547 if str(predictor.device) == "cpu" or str(predictor.device) == "mps": -1548 n_slices = self._viewer.layers["auto_segmentation"].data.shape[0] -1549 embeddings_are_precomputed = (state.amg_state is not None) and (len(state.amg_state) > n_slices) -1550 if not embeddings_are_precomputed: -1551 return False -1552 return True -1553 -1554 def _run_segmentation_3d(self, kwargs): -1555 allow_segment_3d = self._allow_segment_3d() -1556 if not allow_segment_3d: -1557 val_results = { -1558 "message_type": "error", -1559 "message": "Volumetric segmentation with AMG is only supported if you have a GPU." 
-1560 } -1561 return _generate_message(val_results["message_type"], val_results["message"]) -1562 -1563 pbar, pbar_signals = _create_pbar_for_threadworker() -1564 -1565 @thread_worker -1566 def seg_impl(): -1567 segmentation = np.zeros_like(self._viewer.layers["auto_segmentation"].data) -1568 offset = 0 -1569 -1570 def pbar_init(total, description): -1571 pbar_signals.pbar_total.emit(total) -1572 pbar_signals.pbar_description.emit(description) -1573 -1574 pbar_init(segmentation.shape[0], "Segment volume") -1575 -1576 # Further optimization: parallelize if state is precomputed for all slices -1577 for i in range(segmentation.shape[0]): -1578 seg = _instance_segmentation_impl(self.with_background, self.min_object_size, i=i, **kwargs) -1579 seg_max = seg.max() -1580 if seg_max == 0: -1581 continue -1582 seg[seg != 0] += offset -1583 offset = seg_max + offset -1584 segmentation[i] = seg -1585 pbar_signals.pbar_update.emit(1) -1586 -1587 pbar_signals.pbar_reset.emit() -1588 segmentation = merge_instance_segmentation_3d( -1589 segmentation, beta=0.5, with_background=self.with_background, -1590 gap_closing=self.gap_closing, min_z_extent=self.min_extent, -1591 verbose=True, pbar_init=pbar_init, -1592 pbar_update=lambda update: pbar_signals.pbar_update.emit(1), -1593 ) -1594 pbar_signals.pbar_stop.emit() -1595 return segmentation -1596 -1597 def update_segmentation(segmentation): -1598 self._viewer.layers["auto_segmentation"].data = segmentation -1599 self._viewer.layers["auto_segmentation"].refresh() -1600 -1601 worker = seg_impl() -1602 worker.returned.connect(update_segmentation) -1603 worker.start() -1604 return worker -1605 -1606 def __call__(self): -1607 if _validate_embeddings(self._viewer): -1608 return None -1609 -1610 if self.with_decoder: -1611 kwargs = { -1612 "center_distance_threshold": self.center_distance_thresh, -1613 "boundary_distance_threshold": self.boundary_distance_thresh, -1614 "min_size": self.min_object_size, -1615 } -1616 else: -1617 kwargs = { -1618 "pred_iou_thresh": self.pred_iou_thresh, -1619 "stability_score_thresh": self.stability_score_thresh, -1620 "box_nms_thresh": self.box_nms_thresh, -1621 } -1622 if self.volumetric and self.apply_to_volume: -1623 worker = self._run_segmentation_3d(kwargs) -1624 elif self.volumetric and not self.apply_to_volume: -1625 i = int(self._viewer.cursor.position[0]) -1626 worker = self._run_segmentation_2d(kwargs, i=i) -1627 else: -1628 worker = self._run_segmentation_2d(kwargs) -1629 _select_layer(self._viewer, "auto_segmentation") -1630 return worker +1510 def _empty_segmentation_warning(self): +1511 msg = "The automatic segmentation result does not contain any objects." +1512 msg += "Setting a smaller value for 'min_object_size' may help." +1513 if not self.with_decoder: +1514 msg += "Setting smaller values for 'pred_iou_thresh' and 'stability_score_thresh' may also help." 
+1515 val_results = {"message_type": "error", "message": msg} +1516 return _generate_message(val_results["message_type"], val_results["message"]) +1517 +1518 def _run_segmentation_2d(self, kwargs, i=None): +1519 pbar, pbar_signals = _create_pbar_for_threadworker() +1520 +1521 @thread_worker +1522 def seg_impl(): +1523 def pbar_init(total, description): +1524 pbar_signals.pbar_total.emit(total) +1525 pbar_signals.pbar_description.emit(description) +1526 +1527 seg = _instance_segmentation_impl( +1528 self.with_background, self.min_object_size, i=i, +1529 pbar_init=pbar_init, +1530 pbar_update=lambda update: pbar_signals.pbar_update.emit(update), +1531 **kwargs +1532 ) +1533 pbar_signals.pbar_stop.emit() +1534 return seg +1535 +1536 def update_segmentation(seg): +1537 is_empty = seg.max() == 0 +1538 if is_empty: +1539 self._empty_segmentation_warning() +1540 +1541 if i is None: +1542 self._viewer.layers["auto_segmentation"].data = seg +1543 else: +1544 self._viewer.layers["auto_segmentation"].data[i] = seg +1545 self._viewer.layers["auto_segmentation"].refresh() +1546 +1547 worker = seg_impl() +1548 worker.returned.connect(update_segmentation) +1549 worker.start() +1550 return worker +1551 +1552 # We refuse to run 3D segmentation with the AMG unless we have a GPU or all embeddings +1553 # are precomputed. Otherwise this would take too long. +1554 def _allow_segment_3d(self): +1555 if self.with_decoder: +1556 return True +1557 state = AnnotatorState() +1558 predictor = state.predictor +1559 if str(predictor.device) == "cpu" or str(predictor.device) == "mps": +1560 n_slices = self._viewer.layers["auto_segmentation"].data.shape[0] +1561 embeddings_are_precomputed = (state.amg_state is not None) and (len(state.amg_state) > n_slices) +1562 if not embeddings_are_precomputed: +1563 return False +1564 return True +1565 +1566 def _run_segmentation_3d(self, kwargs): +1567 allow_segment_3d = self._allow_segment_3d() +1568 if not allow_segment_3d: +1569 val_results = { +1570 "message_type": "error", +1571 "message": "Volumetric segmentation with AMG is only supported if you have a GPU." 
+1572 } +1573 return _generate_message(val_results["message_type"], val_results["message"]) +1574 +1575 pbar, pbar_signals = _create_pbar_for_threadworker() +1576 +1577 @thread_worker +1578 def seg_impl(): +1579 segmentation = np.zeros_like(self._viewer.layers["auto_segmentation"].data) +1580 offset = 0 +1581 +1582 def pbar_init(total, description): +1583 pbar_signals.pbar_total.emit(total) +1584 pbar_signals.pbar_description.emit(description) +1585 +1586 pbar_init(segmentation.shape[0], "Segment volume") +1587 +1588 # Further optimization: parallelize if state is precomputed for all slices +1589 for i in range(segmentation.shape[0]): +1590 seg = _instance_segmentation_impl(self.with_background, self.min_object_size, i=i, **kwargs) +1591 seg_max = seg.max() +1592 if seg_max == 0: +1593 continue +1594 seg[seg != 0] += offset +1595 offset = seg_max + offset +1596 segmentation[i] = seg +1597 pbar_signals.pbar_update.emit(1) +1598 +1599 pbar_signals.pbar_reset.emit() +1600 segmentation = merge_instance_segmentation_3d( +1601 segmentation, beta=0.5, with_background=self.with_background, +1602 gap_closing=self.gap_closing, min_z_extent=self.min_extent, +1603 verbose=True, pbar_init=pbar_init, +1604 pbar_update=lambda update: pbar_signals.pbar_update.emit(1), +1605 ) +1606 pbar_signals.pbar_stop.emit() +1607 return segmentation +1608 +1609 def update_segmentation(segmentation): +1610 is_empty = segmentation.max() == 0 +1611 if is_empty: +1612 self._empty_segmentation_warning() +1613 self._viewer.layers["auto_segmentation"].data = segmentation +1614 self._viewer.layers["auto_segmentation"].refresh() +1615 +1616 worker = seg_impl() +1617 worker.returned.connect(update_segmentation) +1618 worker.start() +1619 return worker +1620 +1621 def __call__(self): +1622 if _validate_embeddings(self._viewer): +1623 return None +1624 +1625 if self.with_decoder: +1626 kwargs = { +1627 "center_distance_threshold": self.center_distance_thresh, +1628 "boundary_distance_threshold": self.boundary_distance_thresh, +1629 "min_size": self.min_object_size, +1630 } +1631 else: +1632 kwargs = { +1633 "pred_iou_thresh": self.pred_iou_thresh, +1634 "stability_score_thresh": self.stability_score_thresh, +1635 "box_nms_thresh": self.box_nms_thresh, +1636 } +1637 if self.volumetric and self.apply_to_volume: +1638 worker = self._run_segmentation_3d(kwargs) +1639 elif self.volumetric and not self.apply_to_volume: +1640 i = int(self._viewer.cursor.position[0]) +1641 worker = self._run_segmentation_2d(kwargs, i=i) +1642 else: +1643 worker = self._run_segmentation_2d(kwargs) +1644 _select_layer(self._viewer, "auto_segmentation") +1645 return worker
Segment Anything for Microscopy implements automatic and interactive annotation for microscopy data. It is built on top of Segment Anything by Meta AI and specializes it for microscopy and other bio-imaging data.\nIts core components are:
\n\nmicro_sam
tools for interactive data annotation, built as napari plugin.micro_sam
library to apply Segment Anything to 2d and 3d data or fine-tune it on your data.micro_sam
models that are fine-tuned on publicly available microscopy data and that are available on BioImage.IO.Based on these components micro_sam
enables fast interactive and automatic annotation for microscopy data, like interactive cell segmentation from bounding boxes:
micro_sam
is now available as stable version 1.0 and we will not change its user interface significantly in the foreseeable future.\nWe are still working on improving and extending its functionality. The current roadmap includes:
If you run into any problems or have questions please open an issue or reach out via image.sc using the tag micro-sam
.
You can install micro_sam
via mamba:
$ mamba install -c conda-forge micro_sam\n
\n\nWe also provide installers for Windows and Linux. For more details on the available installation options check out the installation section.
\n\nAfter installing micro_sam
you can start napari and select the annotation tool you want to use from Plugins->Segment Anything for Microscopy
. Check out the quickstart tutorial video for a short introduction and the annotation tool section for details.
The micro_sam
python library can be imported via
import micro_sam\n
\nIt is explained in more detail here.
\n\nWe provide different finetuned models for microscopy that can be used within our tools or any other tool that supports Segment Anything. See finetuned models for details on the available models.\nYou can also train models on your own data, see here for details.
\n\nIf you are using micro_sam
in your research please cite
vit-tiny
models please also cite Mobile SAM.There are three ways to install micro_sam
:
mamba is a drop-in replacement for conda, but much faster.\nWhile the steps below may also work with conda
, we highly recommend using mamba
.\nYou can follow the instructions here to install mamba
.
IMPORTANT: Make sure to avoid installing anything in the base environment.
\n\nmicro_sam
can be installed in an existing environment via:
$ mamba install -c conda-forge micro_sam\n
\n\nor you can create a new environment (here called micro-sam
) with it via:
$ mamba create -c conda-forge -n micro-sam micro_sam\n
\n\nif you want to use the GPU you need to install PyTorch from the pytorch
channel instead of conda-forge
. For example:
$ mamba create -c pytorch -c nvidia -c conda-forge micro_sam pytorch pytorch-cuda=12.1\n
\n\nYou may need to change this command to install the correct CUDA version for your system, see https://pytorch.org/ for details.
\n\nTo install micro_sam
from source, we recommend to first set up an environment with the necessary requirements:
To create one of these environments and install micro_sam
into it follow these steps
$ git clone https://github.com/computational-cell-analytics/micro-sam\n
\n\n$ cd micro-sam\n
\n\n$ mamba env create -f <ENV_FILE>.yaml\n
\n\n$ mamba activate sam\n
\n\nmicro_sam
:$ pip install -e .\n
\n\nWe also provide installers for Linux and Windows:
\n\n\n\nThe installers will not enable you to use a GPU, so if you have one then please consider installing micro_sam
via mamba instead. They will also not enable using the python library.
Linux Installer:
\n\nTo use the installer:
\n\n$ chmod +x micro_sam-0.2.0post1-Linux-x86_64.sh
$ ./micro_sam-0.2.0post1-Linux-x86_64.sh
\nmicro_sam
during the installation. By default it will be installed in $HOME/micro_sam
.micro_sam
files to the installation directory..../micro_sam/bin/micro_sam.annotator
.\n.../micro_sam/bin
to your PATH
or set a softlink to .../micro_sam/bin/micro_sam.annotator
.Windows Installer:
\n\nJust Me(recommended)
or All Users(requires admin privileges)
.C:\\Users\\<Username>\\micro_sam
for Just Me
installation or in C:\\ProgramData\\micro_sam
for All Users
.\n.\\micro_sam\\Scripts\\micro_sam.annotator.exe
or with the command .\\micro_sam\\Scripts\\micro_sam.annotator.exe
from the Command Prompt.micro_sam
provides applications for fast interactive 2d segmentation, 3d segmentation and tracking.\nSee an example for interactive cell segmentation in phase-contrast microscopy (left), interactive segmentation\nof mitochondria in volume EM (middle) and interactive tracking of cells (right).
\n\n
\n\nThe annotation tools can be started from the napari plugin menu, the command line or from python scripts.\nThey are built as napari plugin and make use of existing napari functionality wherever possible. If you are not familiar with napari yet, start here.\nThe micro_sam
tools mainly use the point layer, shape layer and label layer.
The annotation tools are explained in detail below. We also provide video tutorials.
\n\nThe annotation tools can be started from the napari plugin menu:\n
\n\nThe 2d annotator can be started by
\n\nAnnotator 2d
in the plugin menu.$ micro_sam.annotator_2d
in the command line.micro_sam.sam_annotator.annotator_2d
in a python script. Check out examples/annotator_2d.py for details. The user interface of the 2d annotator looks like this:
\n\n\n\nIt contains the following elements:
\n\nprompts
: shape layer that is used to provide box prompts to SegmentAnything. Annotations can be given as rectangle (box prompt in the image), ellipse or polygon.point_prompts
: point layer that is used to provide point prompts to SegmentAnything. Positive prompts (green points) for marking the object you want to segment, negative prompts (red points) for marking the outside of the object.committed_objects
: label layer with the objects that have already been segmented.auto_segmentation
: label layer with the results from automatic instance segmentation.current_object
: label layer for the object(s) you're currently segmenting.Embedding Settings
contain advanced settings for loading cached embeddings from file or using tiled embeddings.T
.Segment Object
(or pressing S
) will run segmentation for the current prompts. The result is displayed in current_object
. Activating batched
enables segmentation of multiple objects with point prompts. In this case an object will be segmented per positive prompt.Automatic Segmentation
will segment all objects in the image. The results will be displayed in the auto_segmentation
layer. We support two different methods for automatic segmentation: automatic mask generation (supported for all models) and instance segmentation with an additional decoder (only supported for our models).\nChanging the parameters under Automatic Segmentation Settings
controls the segmentation results, check the tooltips for details.Commit
(or pressing C
) the result from the selected layer (either current_object
or auto_segmentation
) will be transferred from the respective layer to committed_objects
.\nWhen commit_path
is given the results will automatically be saved there.Clear Annotations
(or pressing Shift + C
) will clear the current annotations and the current segmentation.Note that point prompts and box prompts can be combined. When you're using point prompts you can only segment one object at a time, unless the batched
mode is activated. With box prompts you can segment several objects at once, both in the normal and batched
mode.
Check out this video for a tutorial for this tool.
\n\nThe 3d annotator can be started by
\n\nAnnotator 3d
in the plugin menu.$ micro_sam.annotator_3d
in the command line.micro_sam.sam_annotator.annotator_3d
in a python script. Check out examples/annotator_3d.py for details.The user interface of the 3d annotator looks like this:
\n\n\n\nMost elements are the same as in the 2d annotator:
\n\nSegment All Slices
(or Shift + S
) will extend the segmentation for the current object across the volume by projecting prompts across slices. The parameters for prompt projection can be set in Segmentation Settings
, please refer to the tooltips for details.Apply to Volume
needs to be checked, otherwise only the current slice will be segmented. Note that 3D segmentation can take quite long without a GPU.all slices
is set all annotations will be cleared, otherwise they are only cleared for the current slice.Note that you can only segment one object at a time using the interactive segmentation functionality with this tool.
\n\nCheck out this video for a tutorial for the 3d annotation tool.
\n\nThe tracking annotator can be started by
\n\nAnnotator Tracking
in the plugin menu.$ micro_sam.annotator_tracking
in the command line.micro_sam.sam_annotator.annotator_tracking
in a python script. Check out examples/annotator_tracking.py for details. The user interface of the tracking annotator looks like this:
\n\n\n\nMost elements are the same as in the 2d annotator:
\n\nauto_segmentation
layer.track_state
is used to indicate that the object you are tracking is dividing in the current frame. track_id
is used to select which of the tracks after division you are following.Track Object
(or press Shift + S
) to segment the current object across time.Note that the tracking annotator only supports 2d image data, volumetric data is not supported. We also do not support automatic tracking yet.
\n\nCheck out this video for a tutorial for how to use the tracking annotation tool.
\n\nThe image series annotation tool enables running the 2d annotator or 2d annotator for multiple images that are saved within an folder. This makes it convenient to annotate many images without having to close the tool. It can be started by
\n\nImage Series Annotator
in the plugin menu.$ micro_sam.image_series_annotator
in the command line.micro_sam.sam_annotator.image_series_annotator
in a python script. Check out examples/image_series_annotator.py for details. When starting this tool via the plugin menu the following interface opens:
\n\n\n\nYou can select the folder where your image data is saved with Input Folder
. The annotation results will be saved in Output Folder
.\nYou can specify a rule for loading only a subset of images via pattern
, for example *.tif
to only load tif images. Set is_volumetric
if the data you want to annotate is 3d. The rest of the options are settings for the image embedding computation and are the same as for the embedding menu (see above).\nOnce you click Annotate Images
the images from the folder you have specified will be loaded and the annotation tool is started for them.
This menu will not open if you start the image series annotator from the command line or via python. In this case the input folder and other settings are passed as parameters instead.
\n\nCheck out this video for a tutorial for how to use the image series annotator.
\n\nWe also provide a graphical tool for finetuning models on your own data. It can be started by clicking Finetuning
in the plugin menu.
Note: if you know a bit of python programming we recommend using a script for model finetuning instead. This will give you more options to configure the training. See these instructions for details.
\n\nWhen starting this tool via the plugin menu the following interface opens:
\n\n\n\nYou can select the image data via Path to images
. We can either load images from a folder or select a single file for training. By providing Image data key
you can either provide a pattern for selecting files from a folder or provide an internal filepath for hdf5, zarr or similar file formats.
You can select the label data via Path to labels
and Label data key
, following the same logic as for the image data. We expect label masks stored in the same size as the image data for training. You can for example use annotations created with one of the micro_sam
annotation tools for this, they are stored in the correct format!
The Configuration
option allows you to choose the hardware configuration for training. We try to automatically select the correct setting for your system, but it can also be changed. Please refer to the tooltips for the other parameters.
tile_shape
, which determines the size of the inner tile and halo
, which determines the size of the additional overlap.\nmicro_sam
GUI you can specify the values for the halo
and tile_shape
via the Tile X
, Tile Y
, Halo X
and Halo Y
by clicking on Embeddings Settings
.tile_shape=(1024, 1024), halo=(128, 128)
. See also the wholeslide_annotator example.--tile_shape 1024 1024 --halo 128 128
embedding_path
(-e
in the CLI).halo
such that it is larger than half of the maximal radius of the objects you're segmenting.micro_sam.precompute_embeddings
for this (it is installed with the rest of the applications). You can specify the location of the precomputed embeddings via the embedding_path
argument.\nembeddings save path
. Existing embeddings are loaded from the specified path or embeddings are computed and the path is used to save them. model_type
argument and either set it to vit_b
or to vit_l
(default is vit_h
). However, this may lead to worse results.committed_objects
/ committed_tracks
layer to correct segmentations you obtained from another tool (e.g. CellPose) or to save intermediate annotation results. The results can be saved via File -> Save Selected Layer(s) ...
in the napari menu (see the tutorial videos for details). They can be loaded again by specifying the corresponding location via the segmentation_result
(2d and 3d segmentation) or tracking_result
(tracking) argument.The python library can be imported via
\n\nimport micro_sam\n
\nThis library extends the Segment Anything library and
\n\nmicro_sam.prompt_based_segmentation
.micro_sam.instance_segmentation
.micro_sam.training
.micro_sam.evaluation
.You can import these sub-modules via
\n\nimport micro_sam.prompt_based_segmentation\nimport micro_sam.instance_segmentation\n# etc.\n
\nThis functionality is used to implement the interactive annotation tools in micro_sam.sam_annotator
and can be used as a standalone python library.\nWe provide jupyter notebooks that demonstrate how to use it here. You can find the full library documentation by scrolling to the end of this page.
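A compact sketch of this workflow from python (the file name is a placeholder; the helper functions in micro_sam.prompt_based_segmentation and their coordinate conventions should be checked against the docstrings):

import imageio.v3 as imageio
import numpy as np
from micro_sam.util import get_sam_model, precompute_image_embeddings
from micro_sam.prompt_based_segmentation import segment_from_points

image = imageio.imread("example_image.tif")
predictor = get_sam_model(model_type="vit_b_lm")
embeddings = precompute_image_embeddings(predictor, image)

# One positive point prompt (label 1) placed on the object of interest.
points = np.array([[128, 156]])
labels = np.array([1])
mask = segment_from_points(predictor, points, labels, image_embeddings=embeddings)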
We reimplement the training logic described in the Segment Anything publication to enable finetuning on custom data.\nWe use this functionality to provide the finetuned microscopy models and it can also be used to train models on your own data.\nIn fact the best results can be expected when finetuning on your own data, and we found that it does not require much annotated training data to get significant improvements in model performance.\nSo a good strategy is to annotate a few images with one of the provided models using our interactive annotation tools and, if the model is not working as well as required for your use-case, finetune on the annotated data.\n
\n\nThe training logic is implemented in micro_sam.training
and is based on torch-em. Check out the finetuning notebook to see how to use it.
We also support training an additional decoder for automatic instance segmentation. This yields better results than the automatic mask generation of segment anything and is significantly faster.\nThe notebook explains how to activate training it together with the rest of SAM and how to then use it.
\n\nMore advanced examples, including quantitative and qualitative evaluation, of finetuned models can be found in finetuning, which contains the code for training and evaluating our models.
\n\nIn addition to the original Segment Anything models, we provide models that are finetuned on microscopy data.\nThe additional models are available in the bioimage.io modelzoo and are also hosted on zenodo.
\n\nWe currently offer the following models:
\n\nvit_h
: Default Segment Anything model with vit-h backbone.vit_l
: Default Segment Anything model with vit-l backbone.vit_b
: Default Segment Anything model with vit-b backbone.vit_t
: Segment Anything model with vit-tiny backbone. From the Mobile SAM publication. vit_l_lm
: Finetuned Segment Anything model for cells and nuclei in light microscopy data with vit-l backbone. (zenodo, bioimage.io)vit_b_lm
: Finetuned Segment Anything model for cells and nuclei in light microscopy data with vit-b backbone. (zenodo, bioimage.io)vit_t_lm
: Finetuned Segment Anything model for cells and nuclei in light microscopy data with vit-t backbone. (zenodo, bioimage.io)vit_l_em_organelles
: Finetuned Segment Anything model for mitochondria and nuclei in electron microscopy data with vit-l backbone. (zenodo, bioimage.io)vit_b_em_organelles
: Finetuned Segment Anything model for mitochondria and nuclei in electron microscopy data with vit-b backbone. (zenodo, bioimage.io)vit_t_em_organelles
: Finetuned Segment Anything model for mitochodria and nuclei in electron microscopy data with vit-t backbone. (zenodo, bioimage.io)See the two figures below of the improvements through the finetuned model for LM and EM data.
\n\n\n\n\n\nYou can select which model to use for annotation by selecting the corresponding name in the embedding menu:
\n\n\n\nTo use a specific model in the python library you need to pass the corresponding name as value to the model_type
parameter exposed by all relevant functions.\nSee for example the 2d annotator example.
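For example (a minimal sketch):

from micro_sam.util import get_sam_model

# Use the light microscopy ViT-B model; any other model name listed above works the same way.
predictor = get_sam_model(model_type="vit_b_lm")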
As a rule of thumb:
\n\nvit_l_lm
or vit_b_lm
model for segmenting cells or nuclei in light microscopy. The larger model (vit_l_lm
) yields a bit better segmentation quality, especially for automatic segmentation, but needs more computational resources.vit_l_em_organelles
or vit_b_em_organelles
models for segmenting mitochondria, nuclei or other roundish organelles in electron microscopy.vit_t_...
models run much faster than other models, but yield inferior quality for many applications. It can still make sense to try them for your use-case if you're working on a laptop and want to annotate many images or volumetric data. See also the figures above for examples where the finetuned models work better than the default models.\nWe are working on further improving these models and adding new models for other biomedical imaging domains.
\n\nPrevious versions of our models are available on zenodo:
\n\nWe do not recommend to use these models since our new models improve upon them significantly. But we provide the links here in case they are needed to reproduce older segmentation workflows.
\n\nWe welcome new contributions!
\n\nFirst, discuss your idea by opening a new issue in micro-sam.
\n\nThis allows you to ask questions, and have the current developers make suggestions about the best way to implement your ideas.
\n\nYou may also find it helpful to look at this developer guide, which explains the organization of the micro-sam code.
\n\nWe use git for version control.
\n\nClone the repository, and checkout the development branch:
\n\ngit clone https://github.com/computational-cell-analytics/micro-sam.git\ncd micro-sam\ngit checkout dev\n
\n\nWe use conda to manage our environments. If you don't have this already, install miniconda or mamba to get started.
\n\nNow you can create the environment, install user and develoepr dependencies, and micro-sam as an editable installation:
\n\nconda env create environment-gpu.yml\nconda activate sam\npython -m pip install requirements-dev.txt\npython -m pip install -e .\n
\n\nNow it's time to make your code changes.
\n\nTypically, changes are made branching off from the development branch. Checkout dev
and then create a new branch to work on your changes.
git checkout dev\ngit checkout -b my-new-feature\n
\n\nWe use google style python docstrings to create documentation for all new code.
\n\nYou may also find it helpful to look at this developer guide, which explains the organization of the micro-sam code.
\n\nThe tests for micro-sam are run with pytest
\n\nTo run the tests:
\n\npytest\n
\n\nIf you have written new code, you will need to write tests to go with it.
\n\nUnit tests are the preferred style of tests for user contributions. Unit tests check small, isolated parts of the code for correctness. If your code is too complicated to write unit tests easily, you may need to consider breaking it up into smaller functions that are easier to test.
\n\nIn cases where tests must use the napari viewer, these tips might be helpful (in particular, the make_napari_viewer_proxy
fixture).
These kinds of tests should be used only in limited circumstances. Developers are advised to prefer smaller unit tests, and avoid integration tests wherever possible.
\n\nPytest uses the pytest-cov plugin to automatically determine which lines of code are covered by tests.
\n\nA short summary report is printed to the terminal output whenever you run pytest. The full results are also automatically written to a file named coverage.xml
.
The Coverage Gutters VSCode extension is useful for visualizing which parts of the code need better test coverage. PyCharm professional has a similar feature, and you may be able to find similar tools for your preferred editor.
\n\nWe also use codecov.io to display the code coverage results from our Github Actions continuous integration.
\n\nOnce you've made changes to the code and written some tests to go with it, you are ready to open a pull request. You can mark your pull request as a draft if you are still working on it, and still get the benefit of discussing the best approach with maintainers.
\n\nRemember that typically changes to micro-sam are made branching off from the development branch. So, you will need to open your pull request to merge back into the dev
branch like this.
We use pdoc to build the documentation.
\n\nTo build the documentation locally, run this command:
\n\npython build_doc.py\n
\n\nThis will start a local server and display the HTML documentation. Any changes you make to the documentation will be updated in real time (you may need to refresh your browser to see the changes).
\n\nIf you want to save the HTML files, append --out
to the command, like this:
python build_doc.py --out\n
\n\nThis will save the HTML files into a new directory named tmp
.
You can add content to the documentation in two ways:
\n\ndoc
directory.\nmicro_sam/__init__.py
module docstring (eg: .. include:: ../doc/my_amazing_new_docs_page.md
). Otherwise it will not be included in the final documentation build!There are a number of options you can use to benchmark performance, and identify problems like slow run times or high memory use in micro-sam.
\n\n\n\nThere is a performance benchmark script available in the micro-sam repository at development/benchmark.py
.
To run the benchmark script:
\n\npython development/benchmark.py --model_type vit_t --device cpu`\n
\n\nFor more details about the user input arguments for the micro-sam benchmark script, see the help:
\n\npython development/benchmark.py --help\n
\n\nFor more detailed line by line performance results, we can use line-profiler.
\n\n\n\n\nline_profiler is a module for doing line-by-line profiling of functions. kernprof is a convenient script for running either line_profiler or the Python standard library's cProfile or profile modules, depending on what is available.
\n
To do line-by-line profiling:
\n\npython -m pip install line_profiler
@profile
decorator to any function in the call stackkernprof -lv benchmark.py --model_type vit_t --device cpu
For more details about how to use line-profiler and kernprof, see the documentation.
\n\nFor more details about the user input arguments for the micro-sam benchmark script, see the help:
\n\npython development/benchmark.py --help\n
\n\nFor more detailed visualizations of profiling results, we use snakeviz.
\n\n\n\n\nSnakeViz is a browser based graphical viewer for the output of Python\u2019s cProfile module
\n
python -m pip install snakeviz
python -m cProfile -o program.prof benchmark.py --model_type vit_h --device cpu
snakeviz program.prof
For more details about how to use snakeviz, see the documentation.
\n\nIf you need to investigate memory use specifically, we use memray.
\n\n\n\n\nMemray is a memory profiler for Python. It can track memory allocations in Python code, in native extension modules, and in the Python interpreter itself. It can generate several different types of reports to help you analyze the captured memory usage data. While commonly used as a CLI tool, it can also be used as a library to perform more fine-grained profiling tasks.
\n
For more details about how to use memray, see the documentation.
\n\nBAND is a service offered by EMBL Heidelberg that gives access to a virtual desktop for image analysis tasks. It is free to use and micro_sam
is installed there.\nIn order to use BAND and start micro_sam
on it follow these steps:
/scratch/cajal-connectomics/hela-2d-image.png
. Select it via Select image. (see screenshot)\nTo copy data to and from BAND you can use any cloud storage, e.g. ownCloud, dropbox or google drive. For this, it's important to note that copy and paste, which you may need for accessing links on BAND, works a bit different in BAND:
\n\nctrl + c
).ctrl + shift + alt
. This will open a side window where you can paste your text via ctrl + v
.ctrl + c
.ctrl + shift + alt
and paste the text in band via ctrl + v
The video below shows how to copy over a link from owncloud and then download the data on BAND using copy and paste:
\n\n\n"}, {"fullname": "micro_sam.bioimageio", "modulename": "micro_sam.bioimageio", "kind": "module", "doc": "\n"}, {"fullname": "micro_sam.bioimageio.model_export", "modulename": "micro_sam.bioimageio.model_export", "kind": "module", "doc": "\n"}, {"fullname": "micro_sam.bioimageio.model_export.DEFAULTS", "modulename": "micro_sam.bioimageio.model_export", "qualname": "DEFAULTS", "kind": "variable", "doc": "\n", "default_value": "{'authors': [Author(affiliation='University Goettingen', email=None, orcid=None, name='Anwai Archit', github_user='anwai98'), Author(affiliation='University Goettingen', email=None, orcid=None, name='Constantin Pape', github_user='constantinpape')], 'description': 'Finetuned Segment Anything Model for Microscopy', 'cite': [CiteEntry(text='Archit et al. Segment Anything for Microscopy', doi='10.1101/2023.08.21.554208', url=None)], 'tags': ['segment-anything', 'instance-segmentation']}"}, {"fullname": "micro_sam.bioimageio.model_export.export_sam_model", "modulename": "micro_sam.bioimageio.model_export", "qualname": "export_sam_model", "kind": "function", "doc": "Export SAM model to BioImage.IO model format.
\n\nThe exported model can be uploaded to bioimage.io and\nbe used in tools that support the BioImage.IO model format.
\n\nimage
.\nIt is used to derive prompt inputs for the model.Wrapper around the SamPredictor.
\n\nThis model supports the same functionality as SamPredictor and can provide mask segmentations\nfrom box, point or mask input prompts.
\n\nInitializes internal Module state, shared by both nn.Module and ScriptModule.
\n", "signature": "(model_type: str)"}, {"fullname": "micro_sam.bioimageio.predictor_adaptor.PredictorAdaptor.sam", "modulename": "micro_sam.bioimageio.predictor_adaptor", "qualname": "PredictorAdaptor.sam", "kind": "variable", "doc": "\n"}, {"fullname": "micro_sam.bioimageio.predictor_adaptor.PredictorAdaptor.load_state_dict", "modulename": "micro_sam.bioimageio.predictor_adaptor", "qualname": "PredictorAdaptor.load_state_dict", "kind": "function", "doc": "Copies parameters and buffers from state_dict
into\nthis module and its descendants. If strict
is True
, then\nthe keys of state_dict
must exactly match the keys returned\nby this module's ~torch.nn.Module.state_dict()
function.
If assign
is True
the optimizer must be created after\nthe call to load_state_dict
.
state_dict
match the keys returned by this module's\n~torch.nn.Module.state_dict()
function. Default: True
False
, the properties of the tensors in the current\nmodule are preserved while when True
, the properties of the\nTensors in the state dict are preserved.\nDefault: False
\n\n\n\n
NamedTuple
withmissing_keys
andunexpected_keys
fields:\n * missing_keys is a list of str containing the missing keys\n * unexpected_keys is a list of str containing the unexpected keys
\n\n", "signature": "(self, state):", "funcdef": "def"}, {"fullname": "micro_sam.bioimageio.predictor_adaptor.PredictorAdaptor.forward", "modulename": "micro_sam.bioimageio.predictor_adaptor", "qualname": "PredictorAdaptor.forward", "kind": "function", "doc": "If a parameter or buffer is registered as
\nNone
and its corresponding key\n exists instate_dict
,load_state_dict()
will raise a\nRuntimeError
.
Returns:
\n", "signature": "(\tself,\timage: torch.Tensor,\tbox_prompts: Optional[torch.Tensor] = None,\tpoint_prompts: Optional[torch.Tensor] = None,\tpoint_labels: Optional[torch.Tensor] = None,\tmask_prompts: Optional[torch.Tensor] = None,\tembeddings: Optional[torch.Tensor] = None) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation", "modulename": "micro_sam.evaluation", "kind": "module", "doc": "Functionality for evaluating Segment Anything models on microscopy data.
\n"}, {"fullname": "micro_sam.evaluation.evaluation", "modulename": "micro_sam.evaluation.evaluation", "kind": "module", "doc": "Evaluation functionality for segmentation predictions from micro_sam.evaluation.automatic_mask_generation
\nand micro_sam.evaluation.inference
.
Run evaluation for instance segmentation predictions.
\n\n\n\n", "signature": "(\tgt_paths: List[Union[str, os.PathLike]],\tprediction_paths: List[Union[str, os.PathLike]],\tsave_path: Union[str, os.PathLike, NoneType] = None,\tverbose: bool = True) -> pandas.core.frame.DataFrame:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.evaluation.run_evaluation_for_iterative_prompting", "modulename": "micro_sam.evaluation.evaluation", "qualname": "run_evaluation_for_iterative_prompting", "kind": "function", "doc": "A DataFrame that contains the evaluation results.
\n
Run evaluation for iterative prompt-based segmentation predictions.
\n\n\n\n", "signature": "(\tgt_paths: List[Union[str, os.PathLike]],\tprediction_root: Union[os.PathLike, str],\texperiment_folder: Union[os.PathLike, str],\tstart_with_box_prompt: bool = False,\toverwrite_results: bool = False) -> pandas.core.frame.DataFrame:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.experiments", "modulename": "micro_sam.evaluation.experiments", "kind": "module", "doc": "A DataFrame that contains the evaluation results.
\n
Predefined experiment settings for experiments with different prompt strategies.
\n"}, {"fullname": "micro_sam.evaluation.experiments.ExperimentSetting", "modulename": "micro_sam.evaluation.experiments", "qualname": "ExperimentSetting", "kind": "variable", "doc": "\n", "default_value": "typing.Dict"}, {"fullname": "micro_sam.evaluation.experiments.full_experiment_settings", "modulename": "micro_sam.evaluation.experiments", "qualname": "full_experiment_settings", "kind": "function", "doc": "The full experiment settings.
\n\n\n\n", "signature": "(\tuse_boxes: bool = False,\tpositive_range: Optional[List[int]] = None,\tnegative_range: Optional[List[int]] = None) -> List[Dict]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.experiments.default_experiment_settings", "modulename": "micro_sam.evaluation.experiments", "qualname": "default_experiment_settings", "kind": "function", "doc": "The list of experiment settings.
\n
The three default experiment settings.
\n\nFor the default experiments we use a single positive prompt,\ntwo positive and four negative prompts and box prompts.
\n\n\n\n", "signature": "() -> List[Dict]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.experiments.get_experiment_setting_name", "modulename": "micro_sam.evaluation.experiments", "qualname": "get_experiment_setting_name", "kind": "function", "doc": "The list of experiment settings.
\n
Get the name for the given experiment setting.
\n\n\n\n", "signature": "(setting: Dict) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.inference", "modulename": "micro_sam.evaluation.inference", "kind": "module", "doc": "The name for this experiment setting.
\n
Inference with Segment Anything models and different prompt strategies.
\n"}, {"fullname": "micro_sam.evaluation.inference.precompute_all_embeddings", "modulename": "micro_sam.evaluation.inference", "qualname": "precompute_all_embeddings", "kind": "function", "doc": "Precompute all image embeddings.
\n\nTo enable running different inference tasks in parallel afterwards.
\n\nPrecompute all point prompts.
\n\nTo enable running different inference tasks in parallel afterwards.
\n\nRun segment anything inference for multiple images using prompts derived from groundtruth.
\n\nRun segment anything inference for multiple images using prompts iteratively\n derived from model outputs and groundtruth
\n\nInference and evaluation for the automatic instance segmentation functionality.
\n"}, {"fullname": "micro_sam.evaluation.instance_segmentation.default_grid_search_values_amg", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "default_grid_search_values_amg", "kind": "function", "doc": "Default grid-search parameter for AMG-based instance segmentation.
\n\nReturn grid search values for the two most important parameters:
\n\npred_iou_thresh
, the threshold for keeping objects according to the IoU predicted by the model.stability_score_thresh
, the threshold for keeping objects according to their stability.pred_iou_thresh
used in the gridsearch.\nBy default values in the range from 0.6 to 0.9 with a stepsize of 0.025 will be used.stability_score_thresh
used in the gridsearch.\nBy default values in the range from 0.6 to 0.9 with a stepsize of 0.025 will be used.\n\n", "signature": "(\tiou_thresh_values: Optional[List[float]] = None,\tstability_score_values: Optional[List[float]] = None) -> Dict[str, List[float]]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.default_grid_search_values_instance_segmentation_with_decoder", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "default_grid_search_values_instance_segmentation_with_decoder", "kind": "function", "doc": "The values for grid search.
\n
Default grid-search parameter for decoder-based instance segmentation.
\n\ncenter_distance_threshold
used in the gridsearch.\nBy default values in the range from 0.3 to 0.7 with a stepsize of 0.1 will be used.boundary_distance_threshold
used in the gridsearch.\nBy default values in the range from 0.3 to 0.7 with a stepsize of 0.1 will be used.distance_smoothing
used in the gridsearch.\nBy default values in the range from 1.0 to 2.0 with a stepsize of 0.1 will be used.min_size
used in the gridsearch.\nBy default the values 50, 100 and 200 are used.\n\n", "signature": "(\tcenter_distance_threshold_values: Optional[List[float]] = None,\tboundary_distance_threshold_values: Optional[List[float]] = None,\tdistance_smoothing_values: Optional[List[float]] = None,\tmin_size_values: Optional[List[float]] = None) -> Dict[str, List[float]]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.run_instance_segmentation_grid_search", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "run_instance_segmentation_grid_search", "kind": "function", "doc": "The values for grid search.
\n
Run grid search for automatic mask generation.
\n\nThe parameters and their respective value ranges for the grid search are specified via the\n'grid_search_values' argument. For example, to run a grid search over the parameters 'pred_iou_thresh'\nand 'stability_score_thresh', you can pass the following:
\n\ngrid_search_values = {\n \"pred_iou_thresh\": [0.6, 0.7, 0.8, 0.9],\n \"stability_score_thresh\": [0.6, 0.7, 0.8, 0.9],\n}\n
\n\nAll combinations of the parameters will be checked.
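As an illustration (not micro_sam code), the grid expansion can be reproduced with itertools.product; every key of the dictionary contributes one axis of the grid:

import itertools

grid_search_values = {
    "pred_iou_thresh": [0.6, 0.7, 0.8, 0.9],
    "stability_score_thresh": [0.6, 0.7, 0.8, 0.9],
}

# 4 x 4 = 16 parameter settings are evaluated in total.
for values in itertools.product(*grid_search_values.values()):
    params = dict(zip(grid_search_values.keys(), values))
    # each combination is applied to the precomputed masks and then evaluated
    print(params)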
\n\nYou can use the functions default_grid_search_values_instance_segmentation_with_decoder
\nor default_grid_search_values_amg
to get the default grid search parameters for the two\nrespective instance segmentation methods.
generate
function.generate
method of the segmenter.Run inference for automatic mask generation.
\n\ngenerate
method of the segmenter.Evaluate gridsearch results.
\n\n\n\n", "signature": "(\tresult_dir: Union[str, os.PathLike],\tgrid_search_parameters: List[str],\tcriterion: str = 'mSA') -> Tuple[Dict[str, Any], float]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.save_grid_search_best_params", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "save_grid_search_best_params", "kind": "function", "doc": "\n", "signature": "(best_kwargs, best_msa, grid_search_result_dir=None):", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.run_instance_segmentation_grid_search_and_inference", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "run_instance_segmentation_grid_search_and_inference", "kind": "function", "doc": "The best parameter setting.\n The evaluation score for the best setting.
\n
Run grid search and inference for automatic mask generation.
\n\nPlease refer to the documentation of run_instance_segmentation_grid_search
\nfor details on how to specify the grid search parameters.
generate
function.generate
method of the segmenter.Inference and evaluation for the LIVECell dataset and\nthe different cell lines contained in it.
\n"}, {"fullname": "micro_sam.evaluation.livecell.CELL_TYPES", "modulename": "micro_sam.evaluation.livecell", "qualname": "CELL_TYPES", "kind": "variable", "doc": "\n", "default_value": "['A172', 'BT474', 'BV2', 'Huh7', 'MCF7', 'SHSY5Y', 'SkBr3', 'SKOV3']"}, {"fullname": "micro_sam.evaluation.livecell.livecell_inference", "modulename": "micro_sam.evaluation.livecell", "qualname": "livecell_inference", "kind": "function", "doc": "Run inference for livecell with a fixed prompt setting.
\n\nRun precomputation of val and test image embeddings for livecell.
\n\nRun inference on livecell with iterative prompting setting.
\n\nRun automatic mask generation grid-search and inference for livecell.
\n\npred_iou_thresh
used in the gridsearch.\nBy default values in the range from 0.6 to 0.9 with a stepsize of 0.025 will be used.stability_score_thresh
used in the gridsearch.\nBy default values in the range from 0.6 to 0.9 with a stepsize of 0.025 will be used.\n\n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tinput_folder: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tiou_thresh_values: Optional[List[float]] = None,\tstability_score_values: Optional[List[float]] = None,\tverbose_gs: bool = False,\tn_val_per_cell_type: int = 25) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.run_livecell_instance_segmentation_with_decoder", "modulename": "micro_sam.evaluation.livecell", "qualname": "run_livecell_instance_segmentation_with_decoder", "kind": "function", "doc": "The path where the predicted images are stored.
\n
Run automatic mask generation grid-search and inference for livecell.
\n\ncenter_distance_threshold
used in the gridsearch.\nBy default values in the range from 0.3 to 0.7 with a stepsize of 0.1 will be used.boundary_distance_threshold
used in the gridsearch.\nBy default values in the range from 0.3 to 0.7 with a stepsize of 0.1 will be used.distance_smoothing
used in the gridsearch.\nBy default values in the range from 1.0 to 2.0 with a stepsize of 0.1 will be used.min_size
used in the gridsearch.\nBy default the values 50, 100 and 200 are used.\n\n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tinput_folder: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tcenter_distance_threshold_values: Optional[List[float]] = None,\tboundary_distance_threshold_values: Optional[List[float]] = None,\tdistance_smoothing_values: Optional[List[float]] = None,\tmin_size_values: Optional[List[float]] = None,\tverbose_gs: bool = False,\tn_val_per_cell_type: int = 25) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.run_livecell_inference", "modulename": "micro_sam.evaluation.livecell", "qualname": "run_livecell_inference", "kind": "function", "doc": "The path where the predicted images are stored.
\n
Run LIVECell inference with command line tool.
\n", "signature": "() -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.run_livecell_evaluation", "modulename": "micro_sam.evaluation.livecell", "qualname": "run_livecell_evaluation", "kind": "function", "doc": "Run LiveCELL evaluation with command line tool.
\n", "signature": "() -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.model_comparison", "modulename": "micro_sam.evaluation.model_comparison", "kind": "module", "doc": "Functionality for qualitative comparison of Segment Anything models on microscopy data.
\n"}, {"fullname": "micro_sam.evaluation.model_comparison.generate_data_for_model_comparison", "modulename": "micro_sam.evaluation.model_comparison", "qualname": "generate_data_for_model_comparison", "kind": "function", "doc": "Generate samples for qualitative model comparison.
\n\nThis precomputes the input for model_comparison
and model_comparison_with_napari
.
micro_sam.util.get_sam_model
.micro_sam.util.get_sam_model
.Create images for a qualitative model comparision.
\n\ngenerate_data_for_model_comparison
.Use napari to display the qualtiative comparison results for two models.
\n\ngenerate_data_for_model_comparison
.Default grid-search parameters for multi-dimensional prompt-based instance segmentation.
\n\niou_threshold
used in the grid-search.\nBy default values in the range from 0.5 to 0.9 with a stepsize of 0.1 will be used.projection
method used in the grid-search.\nBy default the values mask
, bounding_box
and points
are used.box_extension
used in the grid-search.\nBy default values in the range from 0 to 0.25 with a stepsize of 0.025 will be used.\n\n", "signature": "(\tiou_threshold_values: Optional[List[float]] = None,\tprojection_method_values: Union[str, dict, NoneType] = None,\tbox_extension_values: Union[float, int, NoneType] = None) -> Dict[str, List]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.multi_dimensional_segmentation.segment_slices_from_ground_truth", "modulename": "micro_sam.evaluation.multi_dimensional_segmentation", "qualname": "segment_slices_from_ground_truth", "kind": "function", "doc": "The values for grid search.
\n
Segment all objects in a volume by prompt-based segmentation in one slice per object.
\n\nThis function first segments each object in the respective specified slice using interactive\n(prompt-based) segmentation functionality. Then it segments the particular object in the\nremaining slices in the volume.
\n\nRun grid search for prompt-based multi-dimensional instance segmentation.
\n\nThe parameters and their respective value ranges for the grid search are specified via the\ngrid_search_values
argument. For example, to run a grid search over the parameters iou_threshold
,\nprojection
and box_extension
, you can pass the following:
grid_search_values = {\n    \"iou_threshold\": [0.5, 0.6, 0.7, 0.8, 0.9],\n    \"projection\": [\"mask\", \"bounding_box\", \"points\"],\n    \"box_extension\": [0, 0.1, 0.2, 0.3, 0.4, 0.5],\n}\n
\n\nAll combinations of the parameters will be checked.\nIf passed None, the function default_grid_search_values_multi_dimensional_segmentation
is used\nto get the default grid search parameters for the instance segmentation method.
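For reference, a minimal sketch of fetching these defaults (module and function names as listed in this documentation):

from micro_sam.evaluation.multi_dimensional_segmentation import (
    default_grid_search_values_multi_dimensional_segmentation,
)

# Returns a dictionary with default value ranges for "iou_threshold", "projection" and "box_extension".
grid_search_values = default_grid_search_values_multi_dimensional_segmentation()
print(grid_search_values)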
segment_slices_from_ground_truth
function.Run batched inference for input prompts.
\n\n\n\n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\timage: numpy.ndarray,\tbatch_size: int,\tboxes: Optional[numpy.ndarray] = None,\tpoints: Optional[numpy.ndarray] = None,\tpoint_labels: Optional[numpy.ndarray] = None,\tmultimasking: bool = False,\tembedding_path: Union[str, os.PathLike, NoneType] = None,\treturn_instance_segmentation: bool = True,\tsegmentation_ids: Optional[list] = None,\treduce_multimasking: bool = True,\tlogits_masks: Optional[torch.Tensor] = None):", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation", "modulename": "micro_sam.instance_segmentation", "kind": "module", "doc": "The predicted segmentation masks.
\n
Automated instance segmentation functionality.\nThe classes implemented here extend the automatic instance segmentation from Segment Anything:\nhttps://computational-cell-analytics.github.io/micro-sam/micro_sam.html
\n"}, {"fullname": "micro_sam.instance_segmentation.mask_data_to_segmentation", "modulename": "micro_sam.instance_segmentation", "qualname": "mask_data_to_segmentation", "kind": "function", "doc": "Convert the output of the automatic mask generation to an instance segmentation.
\n\n\n\n", "signature": "(\tmasks: List[Dict[str, Any]],\twith_background: bool,\tmin_object_size: int = 0,\tmax_object_size: Optional[int] = None) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.AMGBase", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase", "kind": "class", "doc": "The instance segmentation.
\n
Base class for the automatic mask generators.
\n", "bases": "abc.ABC"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.is_initialized", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.is_initialized", "kind": "variable", "doc": "Whether the mask generator has already been initialized.
\n"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.crop_list", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.crop_list", "kind": "variable", "doc": "The list of mask data after initialization.
\n"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.crop_boxes", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.crop_boxes", "kind": "variable", "doc": "The list of crop boxes.
\n"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.original_size", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.original_size", "kind": "variable", "doc": "The original image size.
\n"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.get_state", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.get_state", "kind": "function", "doc": "Get the initialized state of the mask generator.
\n\n\n\n", "signature": "(self) -> Dict[str, Any]:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.set_state", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.set_state", "kind": "function", "doc": "State of the mask generator.
\n
Set the state of the mask generator.
\n\nClear the state of the mask generator.
\n", "signature": "(self):", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.AutomaticMaskGenerator", "modulename": "micro_sam.instance_segmentation", "qualname": "AutomaticMaskGenerator", "kind": "class", "doc": "Generates an instance segmentation without prompts, using a point grid.
\n\nThis class implements the same logic as\nhttps://github.com/facebookresearch/segment-anything/blob/main/segment_anything/automatic_mask_generator.py\nIt decouples the computationally expensive steps of generating masks from the cheap post-processing operation\nto filter these masks to enable grid search and interactively changing the post-processing.
\n\nUse this class as follows:
\n\namg = AutomaticMaskGenerator(predictor)\namg.initialize(image) # Initialize the masks, this takes care of all expensive computations.\nmasks = amg.generate(pred_iou_thresh=0.8) # Generate the masks. This is fast and enables testing parameters\n
\npoint_grids
must provide explicit point sampling.Initialize image embeddings and masks for an image.
\n\nutil.precompute_image_embeddings
for details.image
has three spatial dimensions\nor a time dimension and two spatial dimensions.Generate instance segmentation for the currently initialized image.
\n\n\n\n", "signature": "(\tself,\tpred_iou_thresh: float = 0.88,\tstability_score_thresh: float = 0.95,\tbox_nms_thresh: float = 0.7,\tcrop_nms_thresh: float = 0.7,\tmin_mask_region_area: int = 0,\toutput_mode: str = 'binary_mask') -> List[Dict[str, Any]]:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.TiledAutomaticMaskGenerator", "modulename": "micro_sam.instance_segmentation", "qualname": "TiledAutomaticMaskGenerator", "kind": "class", "doc": "The instance segmentation masks.
\n
Generates an instance segmentation without prompts, using a point grid.
\n\nImplements the same functionality as AutomaticMaskGenerator
but for tiled embeddings.
point_grids
must provide explicit point sampling.Initialize image embeddings and masks for an image.
\n\nutil.precompute_image_embeddings
for details.image
has three spatial dimensions\nor a time dimension and two spatial dimensions.Adapter to contain the UNETR decoder in a single module.
\n\nTo apply the decoder on top of pre-computed embeddings for\nthe segmentation functionality.\nSee also: https://github.com/constantinpape/torch-em/blob/main/torch_em/model/unetr.py
\n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.__init__", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.__init__", "kind": "function", "doc": "Initializes internal Module state, shared by both nn.Module and ScriptModule.
\n", "signature": "(unetr)"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.base", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.base", "kind": "variable", "doc": "\n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.out_conv", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.out_conv", "kind": "variable", "doc": "\n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.deconv_out", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.deconv_out", "kind": "variable", "doc": "\n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.decoder_head", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.decoder_head", "kind": "variable", "doc": "\n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.final_activation", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.final_activation", "kind": "variable", "doc": "\n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.postprocess_masks", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.postprocess_masks", "kind": "variable", "doc": "\n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.decoder", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.decoder", "kind": "variable", "doc": "\n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.deconv1", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.deconv1", "kind": "variable", "doc": "\n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.deconv2", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.deconv2", "kind": "variable", "doc": "\n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.deconv3", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.deconv3", "kind": "variable", "doc": "\n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.deconv4", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.deconv4", "kind": "variable", "doc": "\n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.forward", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.forward", "kind": "function", "doc": "Defines the computation performed at every call.
\n\nShould be overridden by all subclasses.
\n\nAlthough the recipe for forward pass needs to be defined within\nthis function, one should call the Module
instance afterwards\ninstead of this since the former takes care of running the\nregistered hooks while the latter silently ignores them.
Get UNETR model for automatic instance segmentation.
\n\n\n\n", "signature": "(\timage_encoder: torch.nn.modules.module.Module,\tdecoder_state: Optional[collections.OrderedDict[str, torch.Tensor]] = None,\tdevice: Union[str, torch.device, NoneType] = None) -> torch.nn.modules.module.Module:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.get_decoder", "modulename": "micro_sam.instance_segmentation", "qualname": "get_decoder", "kind": "function", "doc": "The UNETR model.
\n
Get decoder to predict outputs for automatic instance segmentation
\n\n\n\n", "signature": "(\timage_encoder: torch.nn.modules.module.Module,\tdecoder_state: collections.OrderedDict[str, torch.Tensor],\tdevice: Union[str, torch.device, NoneType] = None) -> micro_sam.instance_segmentation.DecoderAdapter:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.get_predictor_and_decoder", "modulename": "micro_sam.instance_segmentation", "qualname": "get_predictor_and_decoder", "kind": "function", "doc": "The decoder for instance segmentation.
\n
Load the SAM model (predictor) and instance segmentation decoder.
\n\nThis requires a checkpoint that contains the state for both predictor\nand decoder.
\n\n\n\n", "signature": "(\tmodel_type: str,\tcheckpoint_path: Union[str, os.PathLike],\tdevice: Union[str, torch.device, NoneType] = None) -> Tuple[segment_anything.predictor.SamPredictor, micro_sam.instance_segmentation.DecoderAdapter]:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder", "kind": "class", "doc": "The SAM predictor.\n The decoder for instance segmentation.
\n
Generates an instance segmentation without prompts, using a decoder.
\n\nImplements the same interface as AutomaticMaskGenerator
.
Use this class as follows:
\n\nsegmenter = InstanceSegmentationWithDecoder(predictor, decoder)\nsegmenter.initialize(image) # Predict the image embeddings and decoder outputs.\nmasks = segmenter.generate(center_distance_threshold=0.75) # Generate the instance segmentation.\n
\nWhether the mask generator has already been initialized.
\n"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.initialize", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.initialize", "kind": "function", "doc": "Initialize image embeddings and decoder predictions for an image.
\n\nutil.precompute_image_embeddings
for details.image
has three spatial dimensions\nor a time dimension and two spatial dimensions.Generate instance segmentation for the currently initialized image.
\n\n\n\n", "signature": "(\tself,\tcenter_distance_threshold: float = 0.5,\tboundary_distance_threshold: float = 0.5,\tforeground_threshold: float = 0.5,\tforeground_smoothing: float = 1.0,\tdistance_smoothing: float = 1.6,\tmin_size: int = 0,\toutput_mode: Optional[str] = 'binary_mask') -> List[Dict[str, Any]]:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.get_state", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.get_state", "kind": "function", "doc": "The instance segmentation masks.
\n
Get the initialized state of the instance segmenter.
\n\n\n\n", "signature": "(self) -> Dict[str, Any]:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.set_state", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.set_state", "kind": "function", "doc": "Instance segmentation state.
\n
Set the state of the instance segmenter.
\n\nClear the state of the instance segmenter.
\n", "signature": "(self):", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.TiledInstanceSegmentationWithDecoder", "modulename": "micro_sam.instance_segmentation", "qualname": "TiledInstanceSegmentationWithDecoder", "kind": "class", "doc": "Same as InstanceSegmentationWithDecoder
but for tiled image embeddings.
Initialize image embeddings and decoder predictions for an image.
\n\nutil.precompute_image_embeddings
for details.image
has three spatial dimensions\nor a time dimension and two spatial dimensions.Get the automatic mask generator class.
\n\n\n\n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tis_tiled: bool,\tdecoder: Optional[torch.nn.modules.module.Module] = None,\t**kwargs) -> Union[micro_sam.instance_segmentation.AMGBase, micro_sam.instance_segmentation.InstanceSegmentationWithDecoder]:", "funcdef": "def"}, {"fullname": "micro_sam.multi_dimensional_segmentation", "modulename": "micro_sam.multi_dimensional_segmentation", "kind": "module", "doc": "The automatic mask generator.
\n
Multi-dimensional segmentation with segment anything.
\n"}, {"fullname": "micro_sam.multi_dimensional_segmentation.PROJECTION_MODES", "modulename": "micro_sam.multi_dimensional_segmentation", "qualname": "PROJECTION_MODES", "kind": "variable", "doc": "\n", "default_value": "('box', 'mask', 'points', 'points_and_mask', 'single_point')"}, {"fullname": "micro_sam.multi_dimensional_segmentation.segment_mask_in_volume", "modulename": "micro_sam.multi_dimensional_segmentation", "qualname": "segment_mask_in_volume", "kind": "function", "doc": "Segment an object mask in in volumetric data.
\n\n\n\n", "signature": "(\tsegmentation: numpy.ndarray,\tpredictor: segment_anything.predictor.SamPredictor,\timage_embeddings: Dict[str, Any],\tsegmented_slices: numpy.ndarray,\tstop_lower: bool,\tstop_upper: bool,\tiou_threshold: float,\tprojection: Union[str, dict],\tupdate_progress: Optional[<built-in function callable>] = None,\tbox_extension: float = 0.0,\tverbose: bool = False) -> Tuple[numpy.ndarray, Tuple[int, int]]:", "funcdef": "def"}, {"fullname": "micro_sam.multi_dimensional_segmentation.merge_instance_segmentation_3d", "modulename": "micro_sam.multi_dimensional_segmentation", "qualname": "merge_instance_segmentation_3d", "kind": "function", "doc": "Array with the volumetric segmentation.\n Tuple with the first and last segmented slice.
\n
Merge stacked 2d instance segmentations into a consistent 3d segmentation.
\n\nSolves a multicut problem based on the overlap of objects to merge across z.
\n\n\n\n", "signature": "(\tslice_segmentation: numpy.ndarray,\tbeta: float = 0.5,\twith_background: bool = True,\tgap_closing: Optional[int] = None,\tmin_z_extent: Optional[int] = None,\tverbose: bool = True,\tpbar_init: Optional[<built-in function callable>] = None,\tpbar_update: Optional[<built-in function callable>] = None) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.multi_dimensional_segmentation.automatic_3d_segmentation", "modulename": "micro_sam.multi_dimensional_segmentation", "qualname": "automatic_3d_segmentation", "kind": "function", "doc": "The merged segmentation.
\n
Segment volume in 3d.
\n\nFirst segments slices individually in 2d and then merges them across 3d\nbased on overlap of objects between slices.
\n\n\n\n", "signature": "(\tvolume: numpy.ndarray,\tpredictor: segment_anything.predictor.SamPredictor,\tsegmentor: micro_sam.instance_segmentation.AMGBase,\tembedding_path: Union[os.PathLike, str, NoneType] = None,\twith_background: bool = True,\tgap_closing: Optional[int] = None,\tmin_z_extent: Optional[int] = None,\tverbose: bool = True,\t**kwargs) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.precompute_state", "modulename": "micro_sam.precompute_state", "kind": "module", "doc": "The segmentation.
\n
Precompute image embeddings and automatic mask generator state for image data.
\n"}, {"fullname": "micro_sam.precompute_state.cache_amg_state", "modulename": "micro_sam.precompute_state", "qualname": "cache_amg_state", "kind": "function", "doc": "Compute and cache or load the state for the automatic mask generator.
\n\n\n\n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\traw: numpy.ndarray,\timage_embeddings: Dict[str, Any],\tsave_path: Union[str, os.PathLike],\tverbose: bool = True,\ti: Optional[int] = None,\t**kwargs) -> micro_sam.instance_segmentation.AMGBase:", "funcdef": "def"}, {"fullname": "micro_sam.precompute_state.cache_is_state", "modulename": "micro_sam.precompute_state", "qualname": "cache_is_state", "kind": "function", "doc": "The automatic mask generator class with the cached state.
\n
Compute and cache or load the state for the automatic mask generator.
\n\n\n\n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tdecoder: torch.nn.modules.module.Module,\traw: numpy.ndarray,\timage_embeddings: Dict[str, Any],\tsave_path: Union[str, os.PathLike],\tverbose: bool = True,\ti: Optional[int] = None,\tskip_load: bool = False,\t**kwargs) -> Optional[micro_sam.instance_segmentation.AMGBase]:", "funcdef": "def"}, {"fullname": "micro_sam.precompute_state.precompute_state", "modulename": "micro_sam.precompute_state", "qualname": "precompute_state", "kind": "function", "doc": "The instance segmentation class with the cached state.
\n
Precompute the image embeddings and other optional state for the input image(s).
\n\nkey
must be given. In case of a folder\nit can be given to provide a glob pattern to subselect files from the folder.Functions for prompt-based segmentation with Segment Anything.
\n"}, {"fullname": "micro_sam.prompt_based_segmentation.segment_from_points", "modulename": "micro_sam.prompt_based_segmentation", "qualname": "segment_from_points", "kind": "function", "doc": "Segmentation from point prompts.
\n\n\n\n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tpoints: numpy.ndarray,\tlabels: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tmultimask_output: bool = False,\treturn_all: bool = False,\tuse_best_multimask: Optional[bool] = None):", "funcdef": "def"}, {"fullname": "micro_sam.prompt_based_segmentation.segment_from_mask", "modulename": "micro_sam.prompt_based_segmentation", "qualname": "segment_from_mask", "kind": "function", "doc": "The binary segmentation mask.
\n
Segmentation from a mask prompt.
\n\n\n\n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tmask: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tuse_box: bool = True,\tuse_mask: bool = True,\tuse_points: bool = False,\toriginal_size: Optional[Tuple[int, ...]] = None,\tmultimask_output: bool = False,\treturn_all: bool = False,\treturn_logits: bool = False,\tbox_extension: float = 0.0,\tbox: Optional[numpy.ndarray] = None,\tpoints: Optional[numpy.ndarray] = None,\tlabels: Optional[numpy.ndarray] = None,\tuse_single_point: bool = False):", "funcdef": "def"}, {"fullname": "micro_sam.prompt_based_segmentation.segment_from_box", "modulename": "micro_sam.prompt_based_segmentation", "qualname": "segment_from_box", "kind": "function", "doc": "The binary segmentation mask.
\n
Segmentation from a box prompt.
\n\n\n\n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tbox: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tmultimask_output: bool = False,\treturn_all: bool = False,\tbox_extension: float = 0.0):", "funcdef": "def"}, {"fullname": "micro_sam.prompt_based_segmentation.segment_from_box_and_points", "modulename": "micro_sam.prompt_based_segmentation", "qualname": "segment_from_box_and_points", "kind": "function", "doc": "The binary segmentation mask.
\n
Segmentation from a box prompt and point prompts.
\n\n\n\n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tbox: numpy.ndarray,\tpoints: numpy.ndarray,\tlabels: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tmultimask_output: bool = False,\treturn_all: bool = False):", "funcdef": "def"}, {"fullname": "micro_sam.prompt_generators", "modulename": "micro_sam.prompt_generators", "kind": "module", "doc": "The binary segmentation mask.
\n
Classes for generating prompts from ground-truth segmentation masks.\nFor training or evaluation of prompt-based segmentation.
\n"}, {"fullname": "micro_sam.prompt_generators.PromptGeneratorBase", "modulename": "micro_sam.prompt_generators", "qualname": "PromptGeneratorBase", "kind": "class", "doc": "PromptGeneratorBase is an interface to implement specific prompt generators.
\n"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator", "kind": "class", "doc": "Generate point and/or box prompts from an instance segmentation.
\n\nYou can use this class to derive prompts from an instance segmentation, either for\nevaluation purposes or for training Segment Anything on custom data.\nIn order to use this generator you need to precompute the bounding boxes and center\ncoordiantes of the instance segmentation, using e.g. util.get_centers_and_bounding_boxes
.
Here's an example for how to use this class:
\n\n# Initialize generator for 1 positive and 4 negative point prompts.\nprompt_generator = PointAndBoxPromptGenerator(1, 4, dilation_strength=8)\n\n# Precompute the bounding boxes for the given segmentation\nbounding_boxes, _ = util.get_centers_and_bounding_boxes(segmentation)\n\n# generate point prompts for the objects with ids 1, 2 and 3\nseg_ids = (1, 2, 3)\nobject_mask = np.stack([segmentation == seg_id for seg_id in seg_ids])[:, None]\nthis_bounding_boxes = [bounding_boxes[seg_id] for seg_id in seg_ids]\npoint_coords, point_labels, _, _ = prompt_generator(object_mask, this_bounding_boxes)\n
\nGenerate point prompts from an instance segmentation iteratively.
\n", "bases": "PromptGeneratorBase"}, {"fullname": "micro_sam.sam_annotator", "modulename": "micro_sam.sam_annotator", "kind": "module", "doc": "The interactive annotation tools.
\n"}, {"fullname": "micro_sam.sam_annotator.annotator_2d", "modulename": "micro_sam.sam_annotator.annotator_2d", "kind": "module", "doc": "\n"}, {"fullname": "micro_sam.sam_annotator.annotator_2d.Annotator2d", "modulename": "micro_sam.sam_annotator.annotator_2d", "qualname": "Annotator2d", "kind": "class", "doc": "Base class for micro_sam annotation plugins.
\n\nImplements the logic for the 2d, 3d and tracking annotator.\nThe annotators differ in their data dimensionality and the widgets.
\n", "bases": "micro_sam.sam_annotator._annotator._AnnotatorBase"}, {"fullname": "micro_sam.sam_annotator.annotator_2d.Annotator2d.__init__", "modulename": "micro_sam.sam_annotator.annotator_2d", "qualname": "Annotator2d.__init__", "kind": "function", "doc": "Create the annotator GUI.
\n\nStart the 2d annotation tool for a given image.
\n\nNone
then the whole image is passed to Segment Anything.\n\n", "signature": "(\timage: numpy.ndarray,\tembedding_path: Optional[str] = None,\tsegmentation_result: Optional[numpy.ndarray] = None,\tmodel_type: str = 'vit_l',\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\treturn_viewer: bool = False,\tviewer: Optional[napari.viewer.Viewer] = None,\tprecompute_amg_state: bool = False,\tcheckpoint_path: Optional[str] = None,\tdevice: Union[str, torch.device, NoneType] = None,\tprefer_decoder: bool = True) -> Optional[napari.viewer.Viewer]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.annotator_3d", "modulename": "micro_sam.sam_annotator.annotator_3d", "kind": "module", "doc": "\n"}, {"fullname": "micro_sam.sam_annotator.annotator_3d.Annotator3d", "modulename": "micro_sam.sam_annotator.annotator_3d", "qualname": "Annotator3d", "kind": "class", "doc": "The napari viewer, only returned if
\nreturn_viewer=True
.
Base class for micro_sam annotation plugins.
\n\nImplements the logic for the 2d, 3d and tracking annotator.\nThe annotators differ in their data dimensionality and the widgets.
\n", "bases": "micro_sam.sam_annotator._annotator._AnnotatorBase"}, {"fullname": "micro_sam.sam_annotator.annotator_3d.Annotator3d.__init__", "modulename": "micro_sam.sam_annotator.annotator_3d", "qualname": "Annotator3d.__init__", "kind": "function", "doc": "Create the annotator GUI.
\n\nStart the 3d annotation tool for a given image volume.
\n\nNone
then the whole image is passed to Segment Anything.\n\n", "signature": "(\timage: numpy.ndarray,\tembedding_path: Optional[str] = None,\tsegmentation_result: Optional[numpy.ndarray] = None,\tmodel_type: str = 'vit_l',\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\treturn_viewer: bool = False,\tviewer: Optional[napari.viewer.Viewer] = None,\tprecompute_amg_state: bool = False,\tcheckpoint_path: Optional[str] = None,\tdevice: Union[str, torch.device, NoneType] = None,\tprefer_decoder: bool = True) -> Optional[napari.viewer.Viewer]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.annotator_tracking", "modulename": "micro_sam.sam_annotator.annotator_tracking", "kind": "module", "doc": "\n"}, {"fullname": "micro_sam.sam_annotator.annotator_tracking.AnnotatorTracking", "modulename": "micro_sam.sam_annotator.annotator_tracking", "qualname": "AnnotatorTracking", "kind": "class", "doc": "The napari viewer, only returned if
\nreturn_viewer=True
.
Base class for micro_sam annotation plugins.
\n\nImplements the logic for the 2d, 3d and tracking annotator.\nThe annotators differ in their data dimensionality and the widgets.
\n", "bases": "micro_sam.sam_annotator._annotator._AnnotatorBase"}, {"fullname": "micro_sam.sam_annotator.annotator_tracking.AnnotatorTracking.__init__", "modulename": "micro_sam.sam_annotator.annotator_tracking", "qualname": "AnnotatorTracking.__init__", "kind": "function", "doc": "Create the annotator GUI.
\n\nStart the tracking annotation tool fora given timeseries.
\n\nNone
then the whole image is passed to Segment Anything.\n\n", "signature": "(\timage: numpy.ndarray,\tembedding_path: Optional[str] = None,\tmodel_type: str = 'vit_l',\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\treturn_viewer: bool = False,\tviewer: Optional[napari.viewer.Viewer] = None,\tcheckpoint_path: Optional[str] = None,\tdevice: Union[str, torch.device, NoneType] = None) -> Optional[napari.viewer.Viewer]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator", "modulename": "micro_sam.sam_annotator.image_series_annotator", "kind": "module", "doc": "\n"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator.image_series_annotator", "modulename": "micro_sam.sam_annotator.image_series_annotator", "qualname": "image_series_annotator", "kind": "function", "doc": "The napari viewer, only returned if
\nreturn_viewer=True
.
Run the annotation tool for a series of images (supported for both 2d and 3d images).
\n\nNone
then the whole image is passed to Segment Anything.\n\n", "signature": "(\timages: Union[List[Union[os.PathLike, str]], List[numpy.ndarray]],\toutput_folder: str,\tmodel_type: str = 'vit_l',\tembedding_path: Optional[str] = None,\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\tviewer: Optional[napari.viewer.Viewer] = None,\treturn_viewer: bool = False,\tprecompute_amg_state: bool = False,\tcheckpoint_path: Optional[str] = None,\tis_volumetric: bool = False,\tdevice: Union[str, torch.device, NoneType] = None,\tprefer_decoder: bool = True) -> Optional[napari.viewer.Viewer]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator.image_folder_annotator", "modulename": "micro_sam.sam_annotator.image_series_annotator", "qualname": "image_folder_annotator", "kind": "function", "doc": "The napari viewer, only returned if
\nreturn_viewer=True
.
Run the 2d annotation tool for a series of images in a folder.
\n\ninput_folder
.\nBy default all files will be loaded.micro_sam.sam_annotator.image_series_annotator
.\n\n", "signature": "(\tinput_folder: str,\toutput_folder: str,\tpattern: str = '*',\tviewer: Optional[napari.viewer.Viewer] = None,\treturn_viewer: bool = False,\t**kwargs) -> Optional[napari.viewer.Viewer]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator.ImageSeriesAnnotator", "modulename": "micro_sam.sam_annotator.image_series_annotator", "qualname": "ImageSeriesAnnotator", "kind": "class", "doc": "The napari viewer, only returned if
\nreturn_viewer=True
.
QWidget(parent: typing.Optional[QWidget] = None, flags: Union[Qt.WindowFlags, Qt.WindowType] = Qt.WindowFlags())
\n", "bases": "micro_sam.sam_annotator._widgets._WidgetBase"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator.ImageSeriesAnnotator.__init__", "modulename": "micro_sam.sam_annotator.image_series_annotator", "qualname": "ImageSeriesAnnotator.__init__", "kind": "function", "doc": "\n", "signature": "(viewer: napari.viewer.Viewer, parent=None)"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator.ImageSeriesAnnotator.run_button", "modulename": "micro_sam.sam_annotator.image_series_annotator", "qualname": "ImageSeriesAnnotator.run_button", "kind": "variable", "doc": "\n"}, {"fullname": "micro_sam.sam_annotator.training_ui", "modulename": "micro_sam.sam_annotator.training_ui", "kind": "module", "doc": "\n"}, {"fullname": "micro_sam.sam_annotator.training_ui.TrainingWidget", "modulename": "micro_sam.sam_annotator.training_ui", "qualname": "TrainingWidget", "kind": "class", "doc": "QWidget(parent: typing.Optional[QWidget] = None, flags: Union[Qt.WindowFlags, Qt.WindowType] = Qt.WindowFlags())
\n", "bases": "micro_sam.sam_annotator._widgets._WidgetBase"}, {"fullname": "micro_sam.sam_annotator.training_ui.TrainingWidget.__init__", "modulename": "micro_sam.sam_annotator.training_ui", "qualname": "TrainingWidget.__init__", "kind": "function", "doc": "\n", "signature": "(parent=None)"}, {"fullname": "micro_sam.sam_annotator.training_ui.TrainingWidget.run_button", "modulename": "micro_sam.sam_annotator.training_ui", "qualname": "TrainingWidget.run_button", "kind": "variable", "doc": "\n"}, {"fullname": "micro_sam.sam_annotator.util", "modulename": "micro_sam.sam_annotator.util", "kind": "module", "doc": "\n"}, {"fullname": "micro_sam.sam_annotator.util.point_layer_to_prompts", "modulename": "micro_sam.sam_annotator.util", "qualname": "point_layer_to_prompts", "kind": "function", "doc": "Extract point prompts for SAM from a napari point layer.
\n\n\n\n", "signature": "(\tlayer: napari.layers.points.points.Points,\ti=None,\ttrack_id=None,\twith_stop_annotation=True) -> Optional[Tuple[numpy.ndarray, numpy.ndarray]]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.util.shape_layer_to_prompts", "modulename": "micro_sam.sam_annotator.util", "qualname": "shape_layer_to_prompts", "kind": "function", "doc": "The point coordinates for the prompts.\n The labels (positive or negative / 1 or 0) for the prompts.
\n
Extract prompts for SAM from a napari shape layer.
\n\nExtracts the bounding box for 'rectangle' shapes and the bounding box and corresponding mask\nfor 'ellipse' and 'polygon' shapes.
\n\n\n\n", "signature": "(\tlayer: napari.layers.shapes.shapes.Shapes,\tshape: Tuple[int, int],\ti=None,\ttrack_id=None) -> Tuple[List[numpy.ndarray], List[Optional[numpy.ndarray]]]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.util.prompt_layer_to_state", "modulename": "micro_sam.sam_annotator.util", "qualname": "prompt_layer_to_state", "kind": "function", "doc": "The box prompts.\n The mask prompts.
\n
Get the state of the track from a point layer for a given timeframe.
\n\nOnly relevant for annotator_tracking.
\n\n\n\n", "signature": "(prompt_layer: napari.layers.points.points.Points, i: int) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.util.prompt_layers_to_state", "modulename": "micro_sam.sam_annotator.util", "qualname": "prompt_layers_to_state", "kind": "function", "doc": "The state of this frame (either \"division\" or \"track\").
\n
Get the state of the track from a point layer and shape layer for a given timeframe.
\n\nOnly relevant for annotator_tracking.
\n\n\n\n", "signature": "(\tpoint_layer: napari.layers.points.points.Points,\tbox_layer: napari.layers.shapes.shapes.Shapes,\ti: int) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data", "modulename": "micro_sam.sample_data", "kind": "module", "doc": "The state of this frame (either \"division\" or \"track\").
\n
Sample microscopy data.
\n\nYou can change the download location for sample data and model weights\nby setting the environment variable: MICROSAM_CACHEDIR
\n\nBy default sample data is downloaded to a folder named 'micro_sam/sample_data'\ninside your default cache directory, eg:\n * Mac: ~/Library/Caches/
Download the sample images for the image series annotator.
\n\n\n\n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_image_series", "modulename": "micro_sam.sample_data", "qualname": "sample_data_image_series", "kind": "function", "doc": "The folder that contains the downloaded data.
\n
Provides image series example image to napari.
\n\nOpens as three separate image layers in napari (one per image in series).\nThe third image in the series has a different size and modality.
\n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_wholeslide_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_wholeslide_example_data", "kind": "function", "doc": "Download the sample data for the 2d annotator.
\n\nThis downloads part of a whole-slide image from the NeurIPS Cell Segmentation Challenge.\nSee https://neurips22-cellseg.grand-challenge.org/ for details on the data.
\n\n\n\n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_wholeslide", "modulename": "micro_sam.sample_data", "qualname": "sample_data_wholeslide", "kind": "function", "doc": "The path of the downloaded image.
\n
Provides wholeslide 2d example image to napari.
\n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_livecell_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_livecell_example_data", "kind": "function", "doc": "Download the sample data for the 2d annotator.
\n\nThis downloads a single image from the LiveCELL dataset.\nSee https://doi.org/10.1038/s41592-021-01249-6 for details on the data.
\n\n\n\n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_livecell", "modulename": "micro_sam.sample_data", "qualname": "sample_data_livecell", "kind": "function", "doc": "The path of the downloaded image.
\n
Provides livecell 2d example image to napari.
\n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_hela_2d_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_hela_2d_example_data", "kind": "function", "doc": "Download the sample data for the 2d annotator.
\n\nThis downloads a single image from the HeLa CTC dataset.
\n\n\n\n", "signature": "(save_directory: Union[str, os.PathLike]) -> Union[str, os.PathLike]:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_hela_2d", "modulename": "micro_sam.sample_data", "qualname": "sample_data_hela_2d", "kind": "function", "doc": "The path of the downloaded image.
\n
Provides HeLa 2d example image to napari.
\n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_3d_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_3d_example_data", "kind": "function", "doc": "Download the sample data for the 3d annotator.
\n\nThis downloads the Lucchi++ datasets from https://casser.io/connectomics/.\nIt is a dataset for mitochondria segmentation in EM.
\n\n\n\n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_3d", "modulename": "micro_sam.sample_data", "qualname": "sample_data_3d", "kind": "function", "doc": "The folder that contains the downloaded data.
\n
Provides Lucchi++ 3d example image to napari.
\n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_tracking_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_tracking_example_data", "kind": "function", "doc": "Download the sample data for the tracking annotator.
\n\nThis data is the cell tracking challenge dataset DIC-C2DH-HeLa.\nCell tracking challenge webpage: http://data.celltrackingchallenge.net\nHeLa cells on a flat glass\nDr. G. van Cappellen. Erasmus Medical Center, Rotterdam, The Netherlands\nTraining dataset: http://data.celltrackingchallenge.net/training-datasets/DIC-C2DH-HeLa.zip (37 MB)\nChallenge dataset: http://data.celltrackingchallenge.net/challenge-datasets/DIC-C2DH-HeLa.zip (41 MB)
\n\n\n\n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_tracking", "modulename": "micro_sam.sample_data", "qualname": "sample_data_tracking", "kind": "function", "doc": "The folder that contains the downloaded data.
\n
Provides tracking example dataset to napari.
\n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_tracking_segmentation_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_tracking_segmentation_data", "kind": "function", "doc": "Download groundtruth segmentation for the tracking example data.
\n\nThis downloads the groundtruth segmentation for the image data from fetch_tracking_example_data
.
\n\n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_segmentation", "modulename": "micro_sam.sample_data", "qualname": "sample_data_segmentation", "kind": "function", "doc": "The folder that contains the downloaded data.
\n
Provides segmentation example dataset to napari.
\n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.synthetic_data", "modulename": "micro_sam.sample_data", "qualname": "synthetic_data", "kind": "function", "doc": "Create synthetic image data and segmentation for training.
\n", "signature": "(shape, seed=None):", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_nucleus_3d_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_nucleus_3d_example_data", "kind": "function", "doc": "Download the sample data for 3d segmentation of nuclei.
\n\nThis data contains a small crop from a volume from the publication\n\"Efficient automatic 3D segmentation of cell nuclei for high-content screening\"\nhttps://doi.org/10.1186/s12859-022-04737-4
\n\n\n\n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.training", "modulename": "micro_sam.training", "kind": "module", "doc": "The path of the downloaded image.
\n
Functionality for training Segment Anything.
\n"}, {"fullname": "micro_sam.training.joint_sam_trainer", "modulename": "micro_sam.training.joint_sam_trainer", "kind": "module", "doc": "\n"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer", "kind": "class", "doc": "Trainer class for training the Segment Anything model.
\n\nThis class is derived from torch_em.trainer.DefaultTrainer
.\nCheck out https://github.com/constantinpape/torch-em/blob/main/torch_em/trainer/default_trainer.py\nfor details on its usage and implementation.
micro_sam.training.util.ConvertToSamInputs
can be used here.n_sub_iteration
)Trainer class for training the Segment Anything model.
\n\nThis class is derived from torch_em.trainer.DefaultTrainer
.\nCheck out https://github.com/constantinpape/torch-em/blob/main/torch_em/trainer/default_trainer.py\nfor details on its usage and implementation.
micro_sam.training.util.ConvertToSamInputs
can be used here.n_sub_iteration
)Wrapper to make the SegmentAnything model trainable.
\n\nInitializes internal Module state, shared by both nn.Module and ScriptModule.
\n", "signature": "(\tsam: segment_anything.modeling.sam.Sam,\tdevice: Union[str, torch.device])"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.sam", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.sam", "kind": "variable", "doc": "\n"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.device", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.device", "kind": "variable", "doc": "\n"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.transform", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.transform", "kind": "variable", "doc": "\n"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.preprocess", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.preprocess", "kind": "function", "doc": "Resize, normalize pixel values and pad to a square input.
\n\n\n\n", "signature": "(self, x: torch.Tensor) -> Tuple[torch.Tensor, Tuple[int, int]]:", "funcdef": "def"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.image_embeddings_oft", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.image_embeddings_oft", "kind": "function", "doc": "\n", "signature": "(self, batched_inputs):", "funcdef": "def"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.forward", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.forward", "kind": "function", "doc": "The resized, normalized and padded tensor.\n The shape of the image after resizing.
\n
Forward pass.
\n\n\n\n", "signature": "(\tself,\tbatched_inputs: List[Dict[str, Any]],\timage_embeddings: torch.Tensor,\tmultimask_output: bool = False) -> List[Dict[str, Any]]:", "funcdef": "def"}, {"fullname": "micro_sam.training.training", "modulename": "micro_sam.training.training", "kind": "module", "doc": "\n"}, {"fullname": "micro_sam.training.training.FilePath", "modulename": "micro_sam.training.training", "qualname": "FilePath", "kind": "variable", "doc": "\n", "default_value": "typing.Union[str, os.PathLike]"}, {"fullname": "micro_sam.training.training.train_sam", "modulename": "micro_sam.training.training", "qualname": "train_sam", "kind": "function", "doc": "The predicted segmentation masks and iou values.
\n
Run training for a SAM model.
\n\nCreate a PyTorch Dataset for training a SAM model.
\n\n\n\n", "signature": "(\traw_paths: Union[List[Union[os.PathLike, str]], str, os.PathLike],\traw_key: Optional[str],\tlabel_paths: Union[List[Union[os.PathLike, str]], str, os.PathLike],\tlabel_key: Optional[str],\tpatch_shape: Tuple[int],\twith_segmentation_decoder: bool,\twith_channels: bool = False,\tsampler=None,\tn_samples: Optional[int] = None,\tis_train: bool = True,\t**kwargs) -> torch.utils.data.dataset.Dataset:", "funcdef": "def"}, {"fullname": "micro_sam.training.training.default_sam_loader", "modulename": "micro_sam.training.training", "qualname": "default_sam_loader", "kind": "function", "doc": "\n", "signature": "(**kwargs) -> torch.utils.data.dataloader.DataLoader:", "funcdef": "def"}, {"fullname": "micro_sam.training.training.CONFIGURATIONS", "modulename": "micro_sam.training.training", "qualname": "CONFIGURATIONS", "kind": "variable", "doc": "The dataset.
\n
Best training configurations for given hardware resources.
\n", "default_value": "{'Minimal': {'model_type': 'vit_t', 'n_objects_per_batch': 4, 'n_sub_iteration': 4}, 'CPU': {'model_type': 'vit_b', 'n_objects_per_batch': 10}, 'gtx1080': {'model_type': 'vit_t', 'n_objects_per_batch': 5}, 'rtx5000': {'model_type': 'vit_b', 'n_objects_per_batch': 10}, 'V100': {'model_type': 'vit_b'}, 'A100': {'model_type': 'vit_h'}}"}, {"fullname": "micro_sam.training.training.train_sam_for_configuration", "modulename": "micro_sam.training.training", "qualname": "train_sam_for_configuration", "kind": "function", "doc": "Run training for a SAM model with the configuration for a given hardware resource.
\n\nSelects the best training settings for the given configuration.\nThe available configurations are listed in CONFIGURATIONS
.
train_sam
.Identity transformation.
\n\nThis is a helper function to skip data normalization when finetuning SAM.\nData normalization is performed within the model and should thus be skipped as\na preprocessing step in training.
\n", "signature": "(x):", "funcdef": "def"}, {"fullname": "micro_sam.training.util.require_8bit", "modulename": "micro_sam.training.util", "qualname": "require_8bit", "kind": "function", "doc": "Transformation to require 8bit input data range (0-255).
\n", "signature": "(x):", "funcdef": "def"}, {"fullname": "micro_sam.training.util.get_trainable_sam_model", "modulename": "micro_sam.training.util", "qualname": "get_trainable_sam_model", "kind": "function", "doc": "Get the trainable sam model.
\n\ncheckpoint_path
.\n\n", "signature": "(\tmodel_type: str = 'vit_l',\tdevice: Union[str, torch.device, NoneType] = None,\tcheckpoint_path: Union[os.PathLike, str, NoneType] = None,\tfreeze: Optional[List[str]] = None,\treturn_state: bool = False) -> micro_sam.training.trainable_sam.TrainableSAM:", "funcdef": "def"}, {"fullname": "micro_sam.training.util.ConvertToSamInputs", "modulename": "micro_sam.training.util", "qualname": "ConvertToSamInputs", "kind": "class", "doc": "The trainable segment anything model.
\n
Convert outputs of data loader to the expected batched inputs of the SegmentAnything model.
\n\nNone
the prompts will not be resized.Helper functions for downloading Segment Anything models and predicting image embeddings.
\n"}, {"fullname": "micro_sam.util.get_cache_directory", "modulename": "micro_sam.util", "qualname": "get_cache_directory", "kind": "function", "doc": "Get micro-sam cache directory location.
\n\nUsers can set the MICROSAM_CACHEDIR environment variable for a custom cache directory.
\n", "signature": "() -> None:", "funcdef": "def"}, {"fullname": "micro_sam.util.microsam_cachedir", "modulename": "micro_sam.util", "qualname": "microsam_cachedir", "kind": "function", "doc": "Return the micro-sam cache directory.
\n\nReturns the top level cache directory for micro-sam models and sample data.
\n\nEvery time this function is called, we check for any user updates made to\nthe MICROSAM_CACHEDIR os environment variable since the last time.
\n", "signature": "() -> None:", "funcdef": "def"}, {"fullname": "micro_sam.util.models", "modulename": "micro_sam.util", "qualname": "models", "kind": "function", "doc": "Return the segmentation models registry.
\n\nWe recreate the model registry every time this function is called,\nso any user changes to the default micro-sam cache directory location\nare respected.
\n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.util.get_device", "modulename": "micro_sam.util", "qualname": "get_device", "kind": "function", "doc": "Get the torch device.
\n\nIf no device is passed the default device for your system is used.\nElse it will be checked if the device you have passed is supported.
\n\n\n\n", "signature": "(\tdevice: Union[str, torch.device, NoneType] = None) -> Union[str, torch.device]:", "funcdef": "def"}, {"fullname": "micro_sam.util.get_sam_model", "modulename": "micro_sam.util", "qualname": "get_sam_model", "kind": "function", "doc": "The device.
\n
Get the SegmentAnything Predictor.
\n\nThis function will download the required model or load it from the cached weight file.\nThis location of the cache can be changed by setting the environment variable: MICROSAM_CACHEDIR.\nThe name of the requested model can be set via model_type
.\nSee https://computational-cell-analytics.github.io/micro-sam/micro_sam.html#finetuned-models\nfor an overview of the available models
Alternatively this function can also load a model from weights stored in a local filepath.\nThe corresponding file path is given via checkpoint_path
. In this case model_type
\nmust be given as the matching encoder architecture, e.g. \"vit_b\" if the weights are for\na SAM model with vit_b encoder.
By default the models are downloaded to a folder named 'micro_sam/models'\ninside your default cache directory, eg:
\n\nget_model_names
.model_type
. If given, model_type
must match the architecture\ncorresponding to the weight file. E.g. if you use weights for SAM with vit_b encoder\nthen model_type
must be given as \"vit_b\".\n\n", "signature": "(\tmodel_type: str = 'vit_l',\tdevice: Union[str, torch.device, NoneType] = None,\tcheckpoint_path: Union[str, os.PathLike, NoneType] = None,\treturn_sam: bool = False,\treturn_state: bool = False) -> mobile_sam.predictor.SamPredictor:", "funcdef": "def"}, {"fullname": "micro_sam.util.export_custom_sam_model", "modulename": "micro_sam.util", "qualname": "export_custom_sam_model", "kind": "function", "doc": "The segment anything predictor.
\n
Export a finetuned segment anything model to the standard model format.
\n\nThe exported model can be used by the interactive annotation tools in micro_sam.annotator
.
Compute the image embeddings (output of the encoder) for the input.
\n\nIf 'save_path' is given the embeddings will be loaded/saved in a zarr container.
\n\n\n\n", "signature": "(\tpredictor: mobile_sam.predictor.SamPredictor,\tinput_: numpy.ndarray,\tsave_path: Union[str, os.PathLike, NoneType] = None,\tlazy_loading: bool = False,\tndim: Optional[int] = None,\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\tverbose: bool = True,\tpbar_init: Optional[<built-in function callable>] = None,\tpbar_update: Optional[<built-in function callable>] = None) -> Dict[str, Any]:", "funcdef": "def"}, {"fullname": "micro_sam.util.set_precomputed", "modulename": "micro_sam.util", "qualname": "set_precomputed", "kind": "function", "doc": "The image embeddings.
\n
Set the precomputed image embeddings for a predictor.
\n\nprecompute_image_embeddings
.image
has three spatial dimensions\nor a time dimension and two spatial dimensions.\n\n", "signature": "(\tpredictor: mobile_sam.predictor.SamPredictor,\timage_embeddings: Dict[str, Any],\ti: Optional[int] = None,\ttile_id: Optional[int] = None) -> mobile_sam.predictor.SamPredictor:", "funcdef": "def"}, {"fullname": "micro_sam.util.compute_iou", "modulename": "micro_sam.util", "qualname": "compute_iou", "kind": "function", "doc": "The predictor with set features.
\n
Compute the intersection over union of two masks.
\n\n\n\n", "signature": "(mask1: numpy.ndarray, mask2: numpy.ndarray) -> float:", "funcdef": "def"}, {"fullname": "micro_sam.util.get_centers_and_bounding_boxes", "modulename": "micro_sam.util", "qualname": "get_centers_and_bounding_boxes", "kind": "function", "doc": "The intersection over union of the two masks.
\n
Returns the center coordinates of the foreground instances in the ground-truth.
\n\n\n\n", "signature": "(\tsegmentation: numpy.ndarray,\tmode: str = 'v') -> Tuple[Dict[int, numpy.ndarray], Dict[int, tuple]]:", "funcdef": "def"}, {"fullname": "micro_sam.util.load_image_data", "modulename": "micro_sam.util", "qualname": "load_image_data", "kind": "function", "doc": "A dictionary that maps object ids to the corresponding centroid.\n A dictionary that maps object_ids to the corresponding bounding box.
\n
Helper function to load image data from file.
\n\n\n\n", "signature": "(\tpath: str,\tkey: Optional[str] = None,\tlazy_loading: bool = False) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.util.segmentation_to_one_hot", "modulename": "micro_sam.util", "qualname": "segmentation_to_one_hot", "kind": "function", "doc": "The image data.
\n
Convert the segmentation to one-hot encoded masks.
\n\n\n\n", "signature": "(\tsegmentation: numpy.ndarray,\tsegmentation_ids: Optional[numpy.ndarray] = None) -> torch.Tensor:", "funcdef": "def"}, {"fullname": "micro_sam.util.get_block_shape", "modulename": "micro_sam.util", "qualname": "get_block_shape", "kind": "function", "doc": "The one-hot encoded masks.
\n
Get a suitable block shape for chunking a given shape.
\n\nThe primary use for this is determining chunk sizes for\nzarr arrays or block shapes for parallelization.
\n\n\n\n", "signature": "(shape: Tuple[int]) -> Tuple[int]:", "funcdef": "def"}, {"fullname": "micro_sam.visualization", "modulename": "micro_sam.visualization", "kind": "module", "doc": "The block shape.
\n
Functionality for visualizing image embeddings.
\n"}, {"fullname": "micro_sam.visualization.compute_pca", "modulename": "micro_sam.visualization", "qualname": "compute_pca", "kind": "function", "doc": "Compute the pca projection of the embeddings to visualize them as RGB image.
\n\n\n\n", "signature": "(embeddings: numpy.ndarray) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.visualization.project_embeddings_for_visualization", "modulename": "micro_sam.visualization", "qualname": "project_embeddings_for_visualization", "kind": "function", "doc": "PCA of the embeddings, mapped to the pixels.
\n
Project image embeddings to pixel-wise PCA.
\n\n\n\n", "signature": "(\timage_embeddings: Dict[str, Any]) -> Tuple[numpy.ndarray, Tuple[float, ...]]:", "funcdef": "def"}]; + /** pdoc search index */const docs = [{"fullname": "micro_sam", "modulename": "micro_sam", "kind": "module", "doc": "The PCA of the embeddings.\n The scale factor for resizing to the original image size.
\n
Segment Anything for Microscopy implements automatic and interactive annotation for microscopy data. It is built on top of Segment Anything by Meta AI and specializes it for microscopy and other bio-imaging data.\nIts core components are:
\n\nmicro_sam
tools for interactive data annotation, built as napari plugin.micro_sam
library to apply Segment Anything to 2d and 3d data or fine-tune it on your data.micro_sam
models that are fine-tuned on publicly available microscopy data and that are available on BioImage.IO.Based on these components micro_sam
enables fast interactive and automatic annotation for microscopy data, like interactive cell segmentation from bounding boxes:
micro_sam
is now available as stable version 1.0 and we will not change its user interface significantly in the foreseeable future.\nWe are still working on improving and extending its functionality. The current roadmap includes:
If you run into any problems or have questions please open an issue or reach out via image.sc using the tag micro-sam
.
You can install micro_sam
via mamba:
$ mamba install -c conda-forge micro_sam\n
\n\nWe also provide installers for Windows and Linux. For more details on the available installation options check out the installation section.
\n\nAfter installing micro_sam
you can start napari and select the annotation tool you want to use from Plugins->Segment Anything for Microscopy
. Check out the quickstart tutorial video for a short introduction and the annotation tool section for details.
The micro_sam
python library can be imported via
import micro_sam\n
\nIt is explained in more detail here.
\n\nWe provide different finetuned models for microscopy that can be used within our tools or any other tool that supports Segment Anything. See finetuned models for details on the available models.\nYou can also train models on your own data, see here for details.
\n\nIf you are using micro_sam
in your research please cite
vit-tiny
models please also cite Mobile SAM.There are three ways to install micro_sam
:
You can find more information on the installation and how to troubleshoot it in the FAQ section.
\n\nmamba is a drop-in replacement for conda, but much faster.\nWhile the steps below may also work with conda
, we highly recommend using mamba
.\nYou can follow the instructions here to install mamba
.
IMPORTANT: Make sure to avoid installing anything in the base environment.
\n\nmicro_sam
can be installed in an existing environment via:
$ mamba install -c conda-forge micro_sam\n
\n\nor you can create a new environment (here called micro-sam
) with it via:
$ mamba create -c conda-forge -n micro-sam micro_sam\n
\n\nif you want to use the GPU you need to install PyTorch from the pytorch
channel instead of conda-forge
. For example:
$ mamba create -c pytorch -c nvidia -c conda-forge micro_sam pytorch pytorch-cuda=12.1\n
\n\nYou may need to change this command to install the correct CUDA version for your system, see https://pytorch.org/ for details.
\n\nTo install micro_sam
from source, we recommend to first set up an environment with the necessary requirements:
To create one of these environments and install micro_sam
into it follow these steps
$ git clone https://github.com/computational-cell-analytics/micro-sam\n
\n\n$ cd micro-sam\n
\n\n$ mamba env create -f <ENV_FILE>.yaml\n
\n\n$ mamba activate sam\n
\n\nmicro_sam
:$ pip install -e .\n
\n\nWe also provide installers for Linux and Windows:
\n\n\n\nThe installers will not enable you to use a GPU, so if you have one then please consider installing micro_sam
via mamba instead. They will also not enable using the python library.
Linux Installer:
\n\nTo use the installer:
\n\n$ chmod +x micro_sam-0.2.0post1-Linux-x86_64.sh
$./micro_sam-0.2.0post1-Linux-x86_64.sh$
\nmicro_sam
during the installation. By default it will be installed in $HOME/micro_sam
.micro_sam
files to the installation directory..../micro_sam/bin/micro_sam.annotator
.\n.../micro_sam/bin
to your PATH
or set a softlink to .../micro_sam/bin/micro_sam.annotator
.Windows Installer:
\n\nJust Me(recommended)
or All Users(requires admin privileges)
.C:\\Users\\<Username>\\micro_sam
for Just Me
installation or in C:\\ProgramData\\micro_sam
for All Users
.\n.\\micro_sam\\Scripts\\micro_sam.annotator.exe
or with the command .\\micro_sam\\Scripts\\micro_sam.annotator.exe
from the Command Prompt.micro_sam
provides applications for fast interactive 2d segmentation, 3d segmentation and tracking.\nSee an example for interactive cell segmentation in phase-contrast microscopy (left), interactive segmentation\nof mitochondria in volume EM (middle) and interactive tracking of cells (right).
\n\n
\n\nThe annotation tools can be started from the napari plugin menu, the command line or from python scripts.\nThey are built as napari plugin and make use of existing napari functionality wherever possible. If you are not familiar with napari yet, start here.\nThe micro_sam
tools mainly use the point layer, shape layer and label layer.
The annotation tools are explained in detail below. We also provide video tutorials.
\n\nThe annotation tools can be started from the napari plugin menu:\n
\n\nYou can find additional information on the annotation tools in the FAQ section.
\n\nThe 2d annotator can be started by
\n\nAnnotator 2d
in the plugin menu.$ micro_sam.annotator_2d
in the command line.micro_sam.sam_annotator.annotator_2d
in a python script. Check out examples/annotator_2d.py for details. The user interface of the 2d annotator looks like this:
\n\n\n\nIt contains the following elements:
\n\nprompts
: shape layer that is used to provide box prompts to SegmentAnything. Annotations can be given as rectangle (box prompt in the image), ellipse or polygon.point_prompts
: point layer that is used to provide point prompts to SegmentAnything. Positive prompts (green points) for marking the object you want to segment, negative prompts (red points) for marking the outside of the object.committed_objects
: label layer with the objects that have already been segmented.auto_segmentation
: label layer with the results from automatic instance segmentation.current_object
: label layer for the object(s) you're currently segmenting.Embedding Settings
contain advanced settings for loading cached embeddings from file or using tiled embeddings.T
.Segment Object
(or pressing S
) will run segmentation for the current prompts. The result is displayed in current_object
. Activating batched
enables segmentation of multiple objects with point prompts. In this case an object will be segmented per positive prompt.Automatic Segmentation
will segment all objects n the image. The results will be displayed in the auto_segmentation
layer. We support two different methods for automatic segmentation: automatic mask generation (supported for all models) and instance segmentation with an additional decoder (only supported for our models).\nChanging the parameters under Automatic Segmentation Settings
controls the segmentation results, check the tooltips for details.Commit
(or pressing C
) the result from the selected layer (either current_object
or auto_segmentation
) will be transferred from the respective layer to committed_objects
.\nWhen commit_path
is given the results will automatically be saved there.Clear Annotations
(or pressing Shift + C
) will clear the current annotations and the current segmentation.Note that point prompts and box prompts can be combined. When you're using point prompts you can only segment one object at a time, unless the batched
mode is activated. With box prompts you can segment several objects at once, both in the normal and batched
mode.
Check out this video for a tutorial for this tool.
\n\nThe 3d annotator can be started by
\n\nAnnotator 3d
in the plugin menu.$ micro_sam.annotator_3d
in the command line.micro_sam.sam_annotator.annotator_3d
in a python script. Check out examples/annotator_3d.py for details.The user interface of the 3d annotator looks like this:
\n\n\n\nMost elements are the same as in the 2d annotator:
\n\nSegment All Slices
(or Shift + S
) will extend the segmentation for the current object across the volume by projecting prompts across slices. The parameters for prompt projection can be set in Segmentation Settings
, please refer to the tooltips for details.Apply to Volume
needs to be checked, otherwise only the current slice will be segmented. Note that 3D segmentation can take quite long without a GPU.all slices
is set all annotations will be cleared, otherwise they are only cleared for the current slice.Note that you can only segment one object at a time using the interactive segmentation functionality with this tool.
\n\nCheck out this video for a tutorial for the 3d annotation tool.
\n\nThe tracking annotator can be started by
\n\nAnnotator Tracking
in the plugin menu.$ micro_sam.annotator_tracking
in the command line.micro_sam.sam_annotator.annotator_tracking
in a python script. Check out examples/annotator_tracking.py for details. The user interface of the tracking annotator looks like this:
\n\n\n\nMost elements are the same as in the 2d annotator:
\n\nauto_segmentation
layer.track_state
is used to indicate that the object you are tracking is dividing in the current frame. track_id
is used to select which of the tracks after division you are following.Track Object
(or press Shift + S
) to segment the current object across time.Note that the tracking annotator only supports 2d image data, volumetric data is not supported. We also do not support automatic tracking yet.
\n\nCheck out this video for a tutorial for how to use the tracking annotation tool.
\n\nThe image series annotation tool enables running the 2d annotator or 2d annotator for multiple images that are saved within an folder. This makes it convenient to annotate many images without having to close the tool. It can be started by
\n\nImage Series Annotator
in the plugin menu.$ micro_sam.image_series_annotator
in the command line.micro_sam.sam_annotator.image_series_annotator
in a python script. Check out examples/image_series_annotator.py for details. When starting this tool via the plugin menu the following interface opens:
\n\n\n\nYou can select the folder where your image data is saved with Input Folder
. The annotation results will be saved in Output Folder
.\nYou can specify a rule for loading only a subset of images via pattern
, for example *.tif
to only load tif images. Set is_volumetric
if the data you want to annotate is 3d. The rest of the options are settings for the image embedding computation and are the same as for the embedding menu (see above).\nOnce you click Annotate Images
the images from the folder you have specified will be loaded and the annotation tool is started for them.
This menu will not open if you start the image series annotator from the command line or via python. In this case the input folder and other settings are passed as parameters instead.
\n\nCheck out this video for a tutorial for how to use the image series annotator.
\n\nWe also provide a graphical tool for finetuning models on your own data. It can be started by clicking Finetuning
in the plugin menu.
Note: if you know a bit of python programming we recommend to use a script for model finetuning instead. This will give you more options to configure the training. See these instructions for details.
\n\nWhen starting this tool via the plugin menu the following interface opens:
\n\n\n\nYou can select the image data via Path to images
. We can either load images from a folder or select a single file for training. By providing Image data key
you can either provide a pattern for selecting files from a folder or provide an internal filepath for hdf5, zarr or similar fileformats.
You can select the label data via Path to labels
and Label data key
, following the same logic as for the image data. We expect label masks stored in the same size as the image data for training. You can for example use annotations created with one of the micro_sam
annotation tools for this, they are stored in the correct format!
The Configuration
option allows you to choose the hardware configuration for training. We try to automatically select the correct setting for your system, but it can also be changed. Please refer to the tooltips for the other parameters.
The python library can be imported via
\n\nimport micro_sam\n
\nThis library extends the Segment Anything library and
\n\nmicro_sam.prompt_based_segmentation
.micro_sam.instance_segmentation
.micro_sam.training
.micro_sam.evaluation
.You can import these sub-modules via
\n\nimport micro_sam.prompt_based_segmentation\nimport micro_sam.instance_segmentation\n# etc.\n
\nThis functionality is used to implement the interactive annotation tools in micro_sam.sam_annotator
and can be used as a standalone python library.\nWe provide jupyter notebooks that demonstrate how to use it here. You can find the full library documentation by scrolling to the end of this page.
We reimplement the training logic described in the Segment Anything publication to enable finetuning on custom data.\nWe use this functionality to provide the finetuned microscopy models and it can also be used to train models on your own data.\nIn fact the best results can be expected when finetuning on your own data, and we found that it does not require much annotated training data to get siginficant improvements in model performance.\nSo a good strategy is to annotate a few images with one of the provided models using our interactive annotation tools and, if the model is not working as good as required for your use-case, finetune on the annotated data.\n
\n\nThe training logic is implemented in micro_sam.training
and is based on torch-em. Check out the finetuning notebook to see how to use it.
We also support training an additional decoder for automatic instance segmentation. This yields better results than the automatic mask generation of segment anything and is significantly faster.\nThe notebook explains how to activate training it together with the rest of SAM and how to then use it.
\n\nMore advanced examples, including quantitative and qualitative evaluation, of finetuned models can be found in finetuning, which contains the code for training and evaluating our models. You can find further information on model training in the FAQ section.
\n\nIn addition to the original Segment Anything models, we provide models that are finetuned on microscopy data.\nThe additional models are available in the bioimage.io modelzoo and are also hosted on zenodo.
\n\nWe currently offer the following models:
\n\nvit_h
: Default Segment Anything model with vit-h backbone.vit_l
: Default Segment Anything model with vit-l backbone.vit_b
: Default Segment Anything model with vit-b backbone.vit_t
: Segment Anything model with vit-tiny backbone. From the Mobile SAM publication. vit_l_lm
: Finetuned Segment Anything model for cells and nuclei in light microscopy data with vit-l backbone. (zenodo, bioimage.io)vit_b_lm
: Finetuned Segment Anything model for cells and nuclei in light microscopy data with vit-b backbone. (zenodo, diplomatic-bug on bioimage.io)vit_t_lm
: Finetuned Segment Anything model for cells and nuclei in light microscopy data with vit-t backbone. (zenodo, bioimage.io)vit_l_em_organelles
: Finetuned Segment Anything model for mitochodria and nuclei in electron microscopy data with vit-l backbone. (zenodo, bioimage.io)vit_b_em_organelles
: Finetuned Segment Anything model for mitochodria and nuclei in electron microscopy data with vit-b backbone. (zenodo, bioimage.io)vit_t_em_organelles
: Finetuned Segment Anything model for mitochodria and nuclei in electron microscopy data with vit-t backbone. (zenodo, bioimage.io)See the two figures below of the improvements through the finetuned model for LM and EM data.
\n\n\n\n\n\nYou can select which model to use for annotation by selecting the corresponding name in the embedding menu:
\n\n\n\nTo use a specific model in the python library you need to pass the corresponding name as value to the model_type
parameter exposed by all relevant functions.\nSee for example the 2d annotator example.
As a rule of thumb:
\n\nvit_l_lm
or vit_b_lm
model for segmenting cells or nuclei in light microscopy. The larger model (vit_l_lm
) yields a bit better segmentation quality, especially for automatic segmentation, but needs more computational resources.vit_l_em_organelles
or vit_b_em_organelles
models for segmenting mitochondria, nuclei or other roundish organelles in electron microscopy.vit_t_...
models run much faster than other models, but yield inferior quality for many applications. It can still make sense to try them for your use-case if your working on a laptop and want to annotate many images or volumetric data. See also the figures above for examples where the finetuned models work better than the default models.\nWe are working on further improving these models and adding new models for other biomedical imaging domains.
\n\nPrevious versions of our models are available on zenodo:
\n\nWe do not recommend to use these models since our new models improve upon them significantly. But we provide the links here in case they are needed to reproduce older segmentation workflows.
\n\nHere we provide frequently asked questions and common issues.\nIf you encounter a problem or question not addressed here feel free to open an issue or to ask your question on image.sc with the tag micro-sam
.
micro_sam
?The installation for micro_sam
is supported in three ways: from mamba (recommended), from source and from installers. Check out our tutorial video to get started with micro_sam
, briefly walking you through the installation process and how to start the tool.
micro_sam
using the installer, I am getting some errors.The installer should work out-of-the-box on Windows and Linux platforms. Please open an issue to report the error you encounter.
\n\n\n\n\nNOTE: The installers enable using
\nmicro_sam
without mamba or conda. However, we recommend the installation from mamba / from source to use all its features seamlessly. Specifically, the installers currently only support the CPU and won't enable you to use the GPU (if you have one).
micro_sam
?From our experience, the micro_sam
annotation tools work seamlessly on most laptop or workstation CPUs and with > 8GB RAM.\nYou might encounter some slowness for $leq$ 8GB RAM. The resources micro_sam
's annotation tools have been tested on are:
Having a GPU will significantly speed up the annotation tools and especially the model finetuning.
\n\nmicro_sam
has been tested mostly with CUDA 12.1 and PyTorch [2.1.1, 2.2.0]. However, the tool and the library is not constrained to a specific PyTorch or CUDA version. So it should work fine with the standard PyTorch installation for your system.
ModuleNotFoundError: No module named 'elf.io
). What should I do?With the latest release 1.0.0, the installation from mamba and source should take care of this and install all the relevant packages for you.\nSo please reinstall micro_sam
.
micro_sam
using pip?The installation is not supported via pip.
\n\nimportError: cannot import name 'UNETR' from 'torch_em.model'
.It's possible that you have an older version of torch-em
installed. Similar errors could often be raised from other libraries, the reasons being: a) Outdated packages installed, or b) Some non-existent module being called. If the source of such error is from micro_sam
, then a)
is most likely the reason . We recommend installing the latest version following the installation instructions.
Yes, you can use the annotator tool for:
\n\nmicro_sam
models on your own microscopy data, in case the provided models do not suffice your needs. One caveat: You need to annotate a few objects before-hand (micro_sam
has the potential of improving interactive segmentation with only a few annotated objects) to proceed with the supervised finetuning procedure.We currently provide three different kind of models: the default models vit_h
, vit_l
, vit_b
and vit_t
; the models for light microscopy vit_l_lm
, vit_b_lm
and vit_t_lm
; the models for electron microscopy vit_l_em_organelles
, vit_b_em_organelles
and vit_t_em_organelles
.\nYou should first try the model that best fits the segmentation task your interested in, a lm
model for cell or nucleus segmentation in light microscopy or a em_organelles
model for segmenting nuclei, mitochondria or other roundish organelles in electron microscopy.\nIf your segmentation problem does not meet these descriptions, or if these models don't work well, you should try one of the default models instead.\nThe letter after vit
denotes the size of the image encoder in SAM, h
(huge) being the largest and t
(tiny) the smallest. The smaller models are faster but may yield worse results. We recommend to either use a vit_l
or vit_b
model, they offer the best trade-off between speed and segmentation quality.\nYou can find more information on model choice here.
The Segment Anything model expects inputs of shape 1024 x 1024 pixels. Inputs that do not match this size will be internally resized to match it. Hence, applying Segment Anything to a much larger image will often lead to inferior results, or somethimes not work at all. To address this, micro_sam
implements tiling: cutting up the input image into tiles of a fixed size (with a fixed overlap) and running Segment Anything for the individual tiles. You can activate tiling with the tile_shape
parameter, which determines the size of the inner tile and halo
, which determines the size of the additional overlap.
micro_sam
annotation tools, you can specify the values for the tile_shape
and halo
via the tile_x
, tile_y
, halo_x
and halo_y
parameters in the Embedding Settings
drop-down menu.micro_sam
library in a python script, you can pass them as tuples, e.g. tile_shape=(1024, 1024), halo=(256, 256)
. See also the wholeslide annotator example.--tile_shape 1024 1024 --halo 256 256
.\n\n\nNOTE: It's recommended to choose the
\nhalo
so that it is larger than half of the maximal radius of the objects you want to segment.
micro_sam
pre-computes the image embeddings produced by the vision transformer backbone in Segment Anything, and (optionally) store them on disc. I fyou are using a CPU, this step can take a while for 3d data or time-series (you will see a progress bar in the command-line interface / on the bootom right of napari). If you have access to a GPU without graphical interface (e.g. via a local computer cluster or a cloud provider), you can also pre-compute the embeddings there and then copy them over to your laptop / local machine to speed this up.
micro_sam.precompute_embeddings
for this (it is installed with the rest of the software). You can specify the location of the precomputed embeddings via the embedding_path
argument.embeddings_save_path
option in the Embedding Settings
drop-down. You can later load the precomputed image embeddings by entering the path to the stored embeddings there as well.micro_sam
on a CPU?Most other processing steps that are very fast even on a CPU, the automatic segmentation step for the default Segment Anything models (typically called as the \"Segment Anything\" feature or AMG - Automatic Mask Generation) takes several minutes without a GPU (depending on the image size). For large volumes and time-series, segmenting an object interactively in 3d / tracking across time can take a couple of seconds with a CPU (it is very fast with a GPU).
\n\n\n\n\nHINT: All the tutorial videos have been created on CPU resources.
\n
micro_sam
?You can save and load the results from the committed_objects
layer to correct segmentations you obtained from another tool (e.g. CellPose) or save intermediate annotation results. The results can be saved via File
-> Save Selected Layers (s) ...
in the napari menu-bar on top (see the tutorial videos for details). They can be loaded again by specifying the corresponding location via the segmentation_result
parameter in the CLI or python script (2d and 3d segmentation).\nIf you are using an annotation tool you can load the segmentation you want to edit as segmentation layer and renae it to committed_objects
.
micro_sam
for segmenting objects. I would like to report the steps for reproducability. How can this be done?The annotation steps and segmentation results can be saved to a zarr file by providing the commit_path
in the commit
widget. This file will contain all relevant information to reproduce the segmentation.
\n\n\nNOTE: This feature is still under development and we have not implemented rerunning the segmentation from this file yet. See this issue for details.
\n
micro_sam
generalist models do not work for my data. What should I do?micro_sam
supports interactive annotation using positive and negative point prompts, box prompts and polygon drawing. You can combine multiple types of prompts to improve the segmentation quality. In case the aforementioned suggestions do not work as desired, micro_sam
also supports finetuning a model on your data (see the next section). We recommend the following: a) Check which of the provided models performs relatively good on your data, b) Choose the best model as the starting point to train your own specialist model for the desired segmentation task.
While emmitting signal ... an error ocurred in callback ... This is not a bug in psygnal. See ... above for details.
These messages occur when an internal error happens in micro_sam
. In most cases this is due to inconsistent annotations and you can fix them by clearing the annotations.\nWe want to remove these errors, so we would be very grateful if you can open an issue and describe the steps you did when encountering it.
The first thing to check is: a) make sure you are using the latest version of micro_sam
(pull the latest commit from master if your installation is from source, or update the installation from conda / mamba using mamba update micro_sam
), and b) try out the steps from the 3d annotator tutorial video to verify if this shows the same behaviour (or the same errors) as you faced. For 3d images, it's important to pass the inputs in the python axis convention, ZYX.\nc) try using a different model and change the projection mode for 3d segmentation. This is also explained in the video.
micro_sam
to annotate them?Segment Anything does not work well for very small or fine-grained objects (e.g. filaments). In these cases, you could try to use tiling to improve results (see Point 3 above for details).
\n\nEditing (drawing / erasing) very large 2d images or 3d volumes is known to be slow at the moment, as the objects in the layers are stored in-memory. See the related issue.
\n\n\"napari\" is not responding.
pops up.This can happen for long running computations. You just need to wait a bit longer and the computation will finish.
\n\nYes, you can fine-tune Segment Anything on your own dataset. Here's how you can do it:
\n\nmicro_sam.training
library.Yes, you can fine-tune Segment Anything on your custom datasets on Kaggle (and BAND). Check out our tutorial notebook for this.
\n\nTODO: explain instance segmentation labels, that you can get them by annotation with micro_sam, and dense vs. sparse annotation (for training without / with decoder)
\n\nYou can load your finetuned model by entering the path to its checkpoint in the custom_weights_path
field in the Embedding Settings
drop-down menu.\nIf you are using the python library or CLI you can specify this path with the checkpoint_path
parameter.
micro_sam
?micro_sam
introduces a new segmentation decoder to the Segment Anything backbone, for enabling faster and accurate automatic instance segmentation, by predicting the distances to the object center and boundary as well as predicting foregrund, and performing seeded watershed-based postprocessing to obtain the instances.\n
Finetuning Segment Anything is possible in most consumer-grade GPU and CPU resources (but training being a lot slower on the CPU). For the mentioned resource, it should be possible to finetune a ViT Base (also abbreviated as vit_b
) by reducing the number of objects per image to 15.\nThis parameter has the biggest impact on the VRAM consumption and quality of the finetuned model.\nYou can find an overview of the resources we have tested for finetuning here.\nWe also provide a the convenience function micro_sam.training.train_sam_for_configuration
that selects the best training settings for these configuration. This function is also used by the finetuning UI.
Thanks to torch-em
, a) Creating PyTorch datasets and dataloaders using the python library is convenient and supported for various data formats and data structures.\nSee the tutorial notebook on how to create dataloaders using torch-em
and the documentation for details on creating your own datasets and dataloaders; and b) finetuning using the napari
tool eases the aforementioned process, by allowing you to add the input parameters (path to the directory for inputs and labels etc.) directly in the tool.
\n\n\nNOTE: If you have images with large input shapes with a sparse density of instance segmentations, we recommend using
\nsampler
for choosing the patches with valid segmentation for the finetuning purpose (see the example for PlantSeg (Root) specialist model inmicro_sam
).
TODO: move the content of https://github.com/computational-cell-analytics/micro-sam/blob/master/doc/bioimageio/validation.md here.
\n\nWe welcome new contributions!
\n\nFirst, discuss your idea by opening a new issue in micro-sam.
\n\nThis allows you to ask questions, and have the current developers make suggestions about the best way to implement your ideas.
\n\nYou may also find it helpful to look at this developer guide, which explains the organization of the micro-sam code.
\n\nWe use git for version control.
\n\nClone the repository, and checkout the development branch:
\n\ngit clone https://github.com/computational-cell-analytics/micro-sam.git\ncd micro-sam\ngit checkout dev\n
\n\nWe use conda to manage our environments. If you don't have this already, install miniconda or mamba to get started.
\n\nNow you can create the environment, install user and develoepr dependencies, and micro-sam as an editable installation:
\n\nconda env create environment-gpu.yml\nconda activate sam\npython -m pip install requirements-dev.txt\npython -m pip install -e .\n
\n\nNow it's time to make your code changes.
\n\nTypically, changes are made branching off from the development branch. Checkout dev
and then create a new branch to work on your changes.
git checkout dev\ngit checkout -b my-new-feature\n
\n\nWe use google style python docstrings to create documentation for all new code.
\n\nYou may also find it helpful to look at this developer guide, which explains the organization of the micro-sam code.
\n\nThe tests for micro-sam are run with pytest
\n\nTo run the tests:
\n\npytest\n
\n\nIf you have written new code, you will need to write tests to go with it.
\n\nUnit tests are the preferred style of tests for user contributions. Unit tests check small, isolated parts of the code for correctness. If your code is too complicated to write unit tests easily, you may need to consider breaking it up into smaller functions that are easier to test.
\n\nIn cases where tests must use the napari viewer, these tips might be helpful (in particular, the make_napari_viewer_proxy
fixture).
These kinds of tests should be used only in limited circumstances. Developers are advised to prefer smaller unit tests, and avoid integration tests wherever possible.
\n\nPytest uses the pytest-cov plugin to automatically determine which lines of code are covered by tests.
\n\nA short summary report is printed to the terminal output whenever you run pytest. The full results are also automatically written to a file named coverage.xml
.
The Coverage Gutters VSCode extension is useful for visualizing which parts of the code need better test coverage. PyCharm professional has a similar feature, and you may be able to find similar tools for your preferred editor.
\n\nWe also use codecov.io to display the code coverage results from our Github Actions continuous integration.
\n\nOnce you've made changes to the code and written some tests to go with it, you are ready to open a pull request. You can mark your pull request as a draft if you are still working on it, and still get the benefit of discussing the best approach with maintainers.
\n\nRemember that typically changes to micro-sam are made branching off from the development branch. So, you will need to open your pull request to merge back into the dev
branch like this.
We use pdoc to build the documentation.
\n\nTo build the documentation locally, run this command:
\n\npython build_doc.py\n
\n\nThis will start a local server and display the HTML documentation. Any changes you make to the documentation will be updated in real time (you may need to refresh your browser to see the changes).
\n\nIf you want to save the HTML files, append --out
to the command, like this:
python build_doc.py --out\n
\n\nThis will save the HTML files into a new directory named tmp
.
You can add content to the documentation in two ways:
\n\ndoc
directory.\nmicro_sam/__init__.py
module docstring (eg: .. include:: ../doc/my_amazing_new_docs_page.md
). Otherwise it will not be included in the final documentation build!There are a number of options you can use to benchmark performance, and identify problems like slow run times or high memory use in micro-sam.
\n\n\n\nThere is a performance benchmark script available in the micro-sam repository at development/benchmark.py
.
To run the benchmark script:
\n\npython development/benchmark.py --model_type vit_t --device cpu`\n
\n\nFor more details about the user input arguments for the micro-sam benchmark script, see the help:
\n\npython development/benchmark.py --help\n
\n\nFor more detailed line by line performance results, we can use line-profiler.
\n\n\n\n\nline_profiler is a module for doing line-by-line profiling of functions. kernprof is a convenient script for running either line_profiler or the Python standard library's cProfile or profile modules, depending on what is available.
\n
To do line-by-line profiling:
\n\npython -m pip install line_profiler
@profile
decorator to any function in the call stackkernprof -lv benchmark.py --model_type vit_t --device cpu
For more details about how to use line-profiler and kernprof, see the documentation.
\n\nFor more details about the user input arguments for the micro-sam benchmark script, see the help:
\n\npython development/benchmark.py --help\n
\n\nFor more detailed visualizations of profiling results, we use snakeviz.
\n\n\n\n\nSnakeViz is a browser based graphical viewer for the output of Python\u2019s cProfile module
\n
python -m pip install snakeviz
python -m cProfile -o program.prof benchmark.py --model_type vit_h --device cpu
snakeviz program.prof
For more details about how to use snakeviz, see the documentation.
\n\nIf you need to investigate memory use specifically, we use memray.
\n\n\n\n\nMemray is a memory profiler for Python. It can track memory allocations in Python code, in native extension modules, and in the Python interpreter itself. It can generate several different types of reports to help you analyze the captured memory usage data. While commonly used as a CLI tool, it can also be used as a library to perform more fine-grained profiling tasks.
\n
For more details about how to use memray, see the documentation.
\n\nBAND is a service offered by EMBL Heidelberg that gives access to a virtual desktop for image analysis tasks. It is free to use and micro_sam
is installed there.\nIn order to use BAND and start micro_sam
on it follow these steps:
/scratch/cajal-connectomics/hela-2d-image.png
. Select it via Select image. (see screenshot)\nTo copy data to and from BAND you can use any cloud storage, e.g. ownCloud, dropbox or google drive. For this, it's important to note that copy and paste, which you may need for accessing links on BAND, works a bit different in BAND:
\n\nctrl + c
).ctrl + shift + alt
. This will open a side window where you can paste your text via ctrl + v
.ctrl + c
.ctrl + shift + alt
and paste the text in band via ctrl + v
The video below shows how to copy over a link from owncloud and then download the data on BAND using copy and paste:
\n\n\n"}, {"fullname": "micro_sam.bioimageio", "modulename": "micro_sam.bioimageio", "kind": "module", "doc": "\n"}, {"fullname": "micro_sam.bioimageio.model_export", "modulename": "micro_sam.bioimageio.model_export", "kind": "module", "doc": "\n"}, {"fullname": "micro_sam.bioimageio.model_export.DEFAULTS", "modulename": "micro_sam.bioimageio.model_export", "qualname": "DEFAULTS", "kind": "variable", "doc": "\n", "default_value": "{'authors': [Author(affiliation='University Goettingen', email=None, orcid=None, name='Anwai Archit', github_user='anwai98'), Author(affiliation='University Goettingen', email=None, orcid=None, name='Constantin Pape', github_user='constantinpape')], 'description': 'Finetuned Segment Anything Model for Microscopy', 'cite': [CiteEntry(text='Archit et al. Segment Anything for Microscopy', doi='10.1101/2023.08.21.554208', url=None)], 'tags': ['segment-anything', 'instance-segmentation']}"}, {"fullname": "micro_sam.bioimageio.model_export.export_sam_model", "modulename": "micro_sam.bioimageio.model_export", "qualname": "export_sam_model", "kind": "function", "doc": "Export SAM model to BioImage.IO model format.
\n\nThe exported model can be uploaded to bioimage.io and\nbe used in tools that support the BioImage.IO model format.
\n\nimage
.\nIt is used to derive prompt inputs for the model.Wrapper around the SamPredictor.
\n\nThis model supports the same functionality as SamPredictor and can provide mask segmentations\nfrom box, point or mask input prompts.
\n\nInitializes internal Module state, shared by both nn.Module and ScriptModule.
\n", "signature": "(model_type: str)"}, {"fullname": "micro_sam.bioimageio.predictor_adaptor.PredictorAdaptor.sam", "modulename": "micro_sam.bioimageio.predictor_adaptor", "qualname": "PredictorAdaptor.sam", "kind": "variable", "doc": "\n"}, {"fullname": "micro_sam.bioimageio.predictor_adaptor.PredictorAdaptor.load_state_dict", "modulename": "micro_sam.bioimageio.predictor_adaptor", "qualname": "PredictorAdaptor.load_state_dict", "kind": "function", "doc": "Copies parameters and buffers from state_dict
into\nthis module and its descendants. If strict
is True
, then\nthe keys of state_dict
must exactly match the keys returned\nby this module's ~torch.nn.Module.state_dict()
function.
If assign
is True
the optimizer must be created after\nthe call to load_state_dict
.
state_dict
match the keys returned by this module's\n~torch.nn.Module.state_dict()
function. Default: True
False
, the properties of the tensors in the current\nmodule are preserved while when True
, the properties of the\nTensors in the state dict are preserved.\nDefault: False
\n\n\n\n
NamedTuple
withmissing_keys
andunexpected_keys
fields:\n * missing_keys is a list of str containing the missing keys\n * unexpected_keys is a list of str containing the unexpected keys
\n\n", "signature": "(self, state):", "funcdef": "def"}, {"fullname": "micro_sam.bioimageio.predictor_adaptor.PredictorAdaptor.forward", "modulename": "micro_sam.bioimageio.predictor_adaptor", "qualname": "PredictorAdaptor.forward", "kind": "function", "doc": "If a parameter or buffer is registered as
\nNone
and its corresponding key\n exists instate_dict
,load_state_dict()
will raise a\nRuntimeError
.
Returns:
\n", "signature": "(\tself,\timage: torch.Tensor,\tbox_prompts: Optional[torch.Tensor] = None,\tpoint_prompts: Optional[torch.Tensor] = None,\tpoint_labels: Optional[torch.Tensor] = None,\tmask_prompts: Optional[torch.Tensor] = None,\tembeddings: Optional[torch.Tensor] = None) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation", "modulename": "micro_sam.evaluation", "kind": "module", "doc": "Functionality for evaluating Segment Anything models on microscopy data.
\n"}, {"fullname": "micro_sam.evaluation.evaluation", "modulename": "micro_sam.evaluation.evaluation", "kind": "module", "doc": "Evaluation functionality for segmentation predictions from micro_sam.evaluation.automatic_mask_generation
\nand micro_sam.evaluation.inference
.
Run evaluation for instance segmentation predictions.
\n\n\n\n", "signature": "(\tgt_paths: List[Union[str, os.PathLike]],\tprediction_paths: List[Union[str, os.PathLike]],\tsave_path: Union[str, os.PathLike, NoneType] = None,\tverbose: bool = True) -> pandas.core.frame.DataFrame:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.evaluation.run_evaluation_for_iterative_prompting", "modulename": "micro_sam.evaluation.evaluation", "qualname": "run_evaluation_for_iterative_prompting", "kind": "function", "doc": "A DataFrame that contains the evaluation results.
\n
Run evaluation for iterative prompt-based segmentation predictions.
\n\n\n\n", "signature": "(\tgt_paths: List[Union[str, os.PathLike]],\tprediction_root: Union[os.PathLike, str],\texperiment_folder: Union[os.PathLike, str],\tstart_with_box_prompt: bool = False,\toverwrite_results: bool = False) -> pandas.core.frame.DataFrame:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.experiments", "modulename": "micro_sam.evaluation.experiments", "kind": "module", "doc": "A DataFrame that contains the evaluation results.
\n
Predefined experiment settings for experiments with different prompt strategies.
\n"}, {"fullname": "micro_sam.evaluation.experiments.ExperimentSetting", "modulename": "micro_sam.evaluation.experiments", "qualname": "ExperimentSetting", "kind": "variable", "doc": "\n", "default_value": "typing.Dict"}, {"fullname": "micro_sam.evaluation.experiments.full_experiment_settings", "modulename": "micro_sam.evaluation.experiments", "qualname": "full_experiment_settings", "kind": "function", "doc": "The full experiment settings.
\n\n\n\n", "signature": "(\tuse_boxes: bool = False,\tpositive_range: Optional[List[int]] = None,\tnegative_range: Optional[List[int]] = None) -> List[Dict]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.experiments.default_experiment_settings", "modulename": "micro_sam.evaluation.experiments", "qualname": "default_experiment_settings", "kind": "function", "doc": "The list of experiment settings.
\n
The three default experiment settings.
\n\nFor the default experiments we use a single positive prompt,\ntwo positive and four negative prompts and box prompts.
\n\n\n\n", "signature": "() -> List[Dict]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.experiments.get_experiment_setting_name", "modulename": "micro_sam.evaluation.experiments", "qualname": "get_experiment_setting_name", "kind": "function", "doc": "The list of experiment settings.
\n
Get the name for the given experiment setting.
\n\n\n\n", "signature": "(setting: Dict) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.inference", "modulename": "micro_sam.evaluation.inference", "kind": "module", "doc": "The name for this experiment setting.
\n
Inference with Segment Anything models and different prompt strategies.
\n"}, {"fullname": "micro_sam.evaluation.inference.precompute_all_embeddings", "modulename": "micro_sam.evaluation.inference", "qualname": "precompute_all_embeddings", "kind": "function", "doc": "Precompute all image embeddings.
\n\nTo enable running different inference tasks in parallel afterwards.
\n\nPrecompute all point prompts.
\n\nTo enable running different inference tasks in parallel afterwards.
\n\nRun segment anything inference for multiple images using prompts derived from groundtruth.
\n\nRun segment anything inference for multiple images using prompts iteratively\n derived from model outputs and groundtruth
\n\nInference and evaluation for the automatic instance segmentation functionality.
\n"}, {"fullname": "micro_sam.evaluation.instance_segmentation.default_grid_search_values_amg", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "default_grid_search_values_amg", "kind": "function", "doc": "Default grid-search parameter for AMG-based instance segmentation.
\n\nReturn grid search values for the two most important parameters:
\n\npred_iou_thresh
, the threshold for keeping objects according to the IoU predicted by the model.stability_score_thresh
, the theshold for keepong objects according to their stability.pred_iou_thresh
used in the gridsearch.\nBy default values in the range from 0.6 to 0.9 with a stepsize of 0.025 will be used.stability_score_thresh
used in the gridsearch.\nBy default values in the range from 0.6 to 0.9 with a stepsize of 0.025 will be used.\n\n", "signature": "(\tiou_thresh_values: Optional[List[float]] = None,\tstability_score_values: Optional[List[float]] = None) -> Dict[str, List[float]]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.default_grid_search_values_instance_segmentation_with_decoder", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "default_grid_search_values_instance_segmentation_with_decoder", "kind": "function", "doc": "The values for grid search.
\n
Default grid-search parameter for decoder-based instance segmentation.
\n\ncenter_distance_threshold
used in the gridsearch.\nBy default values in the range from 0.3 to 0.7 with a stepsize of 0.1 will be used.boundary_distance_threshold
used in the gridsearch.\nBy default values in the range from 0.3 to 0.7 with a stepsize of 0.1 will be used.distance_smoothing
used in the gridsearch.\nBy default values in the range from 1.0 to 2.0 with a stepsize of 0.1 will be used.min_size
used in the gridsearch.\nBy default the values 50, 100 and 200 are used.\n\n", "signature": "(\tcenter_distance_threshold_values: Optional[List[float]] = None,\tboundary_distance_threshold_values: Optional[List[float]] = None,\tdistance_smoothing_values: Optional[List[float]] = None,\tmin_size_values: Optional[List[float]] = None) -> Dict[str, List[float]]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.run_instance_segmentation_grid_search", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "run_instance_segmentation_grid_search", "kind": "function", "doc": "The values for grid search.
\n
Run grid search for automatic mask generation.
\n\nThe parameters and their respective value ranges for the grid search are specified via the\n'grid_search_values' argument. For example, to run a grid search over the parameters 'pred_iou_thresh'\nand 'stability_score_thresh', you can pass the following:
\n\ngrid_search_values = {\n \"pred_iou_thresh\": [0.6, 0.7, 0.8, 0.9],\n \"stability_score_thresh\": [0.6, 0.7, 0.8, 0.9],\n}\n
\n\nAll combinations of the parameters will be checked.
\n\nYou can use the functions default_grid_search_values_instance_segmentation_with_decoder
\nor default_grid_search_values_amg
to get the default grid search parameters for the two\nrespective instance segmentation methods.
generate
function.generate
method of the segmenter.Run inference for automatic mask generation.
\n\ngenerate
method of the segmenter.Evaluate gridsearch results.
\n\n\n\n", "signature": "(\tresult_dir: Union[str, os.PathLike],\tgrid_search_parameters: List[str],\tcriterion: str = 'mSA') -> Tuple[Dict[str, Any], float]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.save_grid_search_best_params", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "save_grid_search_best_params", "kind": "function", "doc": "\n", "signature": "(best_kwargs, best_msa, grid_search_result_dir=None):", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.run_instance_segmentation_grid_search_and_inference", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "run_instance_segmentation_grid_search_and_inference", "kind": "function", "doc": "The best parameter setting.\n The evaluation score for the best setting.
\n
Run grid search and inference for automatic mask generation.
\n\nPlease refer to the documentation of run_instance_segmentation_grid_search
\nfor details on how to specify the grid search parameters.
generate
function.generate
method of the segmenter.Inference and evaluation for the LIVECell dataset and\nthe different cell lines contained in it.
\n"}, {"fullname": "micro_sam.evaluation.livecell.CELL_TYPES", "modulename": "micro_sam.evaluation.livecell", "qualname": "CELL_TYPES", "kind": "variable", "doc": "\n", "default_value": "['A172', 'BT474', 'BV2', 'Huh7', 'MCF7', 'SHSY5Y', 'SkBr3', 'SKOV3']"}, {"fullname": "micro_sam.evaluation.livecell.livecell_inference", "modulename": "micro_sam.evaluation.livecell", "qualname": "livecell_inference", "kind": "function", "doc": "Run inference for livecell with a fixed prompt setting.
\n\nRun precomputation of val and test image embeddings for livecell.
\n\nRun inference on livecell with iterative prompting setting.
\n\nRun automatic mask generation grid-search and inference for livecell.
\n\npred_iou_thresh
used in the gridsearch.\nBy default values in the range from 0.6 to 0.9 with a stepsize of 0.025 will be used.stability_score_thresh
used in the gridsearch.\nBy default values in the range from 0.6 to 0.9 with a stepsize of 0.025 will be used.\n\n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tinput_folder: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tiou_thresh_values: Optional[List[float]] = None,\tstability_score_values: Optional[List[float]] = None,\tverbose_gs: bool = False,\tn_val_per_cell_type: int = 25) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.run_livecell_instance_segmentation_with_decoder", "modulename": "micro_sam.evaluation.livecell", "qualname": "run_livecell_instance_segmentation_with_decoder", "kind": "function", "doc": "The path where the predicted images are stored.
\n
Run automatic mask generation grid-search and inference for livecell.
\n\ncenter_distance_threshold
used in the gridsearch.\nBy default values in the range from 0.3 to 0.7 with a stepsize of 0.1 will be used.boundary_distance_threshold
used in the gridsearch.\nBy default values in the range from 0.3 to 0.7 with a stepsize of 0.1 will be used.distance_smoothing
used in the gridsearch.\nBy default values in the range from 1.0 to 2.0 with a stepsize of 0.1 will be used.min_size
used in the gridsearch.\nBy default the values 50, 100 and 200 are used.\n\n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tinput_folder: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tcenter_distance_threshold_values: Optional[List[float]] = None,\tboundary_distance_threshold_values: Optional[List[float]] = None,\tdistance_smoothing_values: Optional[List[float]] = None,\tmin_size_values: Optional[List[float]] = None,\tverbose_gs: bool = False,\tn_val_per_cell_type: int = 25) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.run_livecell_inference", "modulename": "micro_sam.evaluation.livecell", "qualname": "run_livecell_inference", "kind": "function", "doc": "The path where the predicted images are stored.
\n
Run LIVECell inference with command line tool.
\n", "signature": "() -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.run_livecell_evaluation", "modulename": "micro_sam.evaluation.livecell", "qualname": "run_livecell_evaluation", "kind": "function", "doc": "Run LiveCELL evaluation with command line tool.
\n", "signature": "() -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.model_comparison", "modulename": "micro_sam.evaluation.model_comparison", "kind": "module", "doc": "Functionality for qualitative comparison of Segment Anything models on microscopy data.
\n"}, {"fullname": "micro_sam.evaluation.model_comparison.generate_data_for_model_comparison", "modulename": "micro_sam.evaluation.model_comparison", "qualname": "generate_data_for_model_comparison", "kind": "function", "doc": "Generate samples for qualitative model comparison.
\n\nThis precomputes the input for model_comparison
and model_comparison_with_napari
.
micro_sam.util.get_sam_model
.micro_sam.util.get_sam_model
.Create images for a qualitative model comparision.
\n\ngenerate_data_for_model_comparison
.Use napari to display the qualtiative comparison results for two models.
\n\ngenerate_data_for_model_comparison
.Default grid-search parameters for multi-dimensional prompt-based instance segmentation.
\n\niou_threshold
used in the grid-search.\nBy default values in the range from 0.5 to 0.9 with a stepsize of 0.1 will be used.projection
method used in the grid-search.\nBy default the values mask
, bounding_box
and points
are used.box_extension
used in the grid-search.\nBy default values in the range from 0 to 0.25 with a stepsize of 0.025 will be used.\n\n", "signature": "(\tiou_threshold_values: Optional[List[float]] = None,\tprojection_method_values: Union[str, dict, NoneType] = None,\tbox_extension_values: Union[float, int, NoneType] = None) -> Dict[str, List]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.multi_dimensional_segmentation.segment_slices_from_ground_truth", "modulename": "micro_sam.evaluation.multi_dimensional_segmentation", "qualname": "segment_slices_from_ground_truth", "kind": "function", "doc": "The values for grid search.
\n
Segment all objects in a volume by prompt-based segmentation in one slice per object.
\n\nThis function first segments each object in the respective specified slice using interactive\n(prompt-based) segmentation functionality. Then it segments the particular object in the\nremaining slices in the volume.
\n\nRun grid search for prompt-based multi-dimensional instance segmentation.
\n\nThe parameters and their respective value ranges for the grid search are specified via the\ngrid_search_values
argument. For example, to run a grid search over the parameters iou_threshold
,\nprojection
and box_extension
, you can pass the following:
grid_search_values = {\n    "iou_threshold": [0.5, 0.6, 0.7, 0.8, 0.9],\n    "projection": ["mask", "bounding_box", "points"],\n    "box_extension": [0, 0.1, 0.2, 0.3, 0.4, 0.5],\n}\n
\n\nAll combinations of the parameters will be checked.\nIf passed None, the function default_grid_search_values_multi_dimensional_segmentation
is used\nto get the default grid search parameters for the instance segmentation method.
segment_slices_from_ground_truth
function.Run batched inference for input prompts.
\n\n\n\n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\timage: numpy.ndarray,\tbatch_size: int,\tboxes: Optional[numpy.ndarray] = None,\tpoints: Optional[numpy.ndarray] = None,\tpoint_labels: Optional[numpy.ndarray] = None,\tmultimasking: bool = False,\tembedding_path: Union[str, os.PathLike, NoneType] = None,\treturn_instance_segmentation: bool = True,\tsegmentation_ids: Optional[list] = None,\treduce_multimasking: bool = True,\tlogits_masks: Optional[torch.Tensor] = None):", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation", "modulename": "micro_sam.instance_segmentation", "kind": "module", "doc": "The predicted segmentation masks.
\n
Automated instance segmentation functionality.\nThe classes implemented here extend the automatic instance segmentation from Segment Anything:\nhttps://computational-cell-analytics.github.io/micro-sam/micro_sam.html
\n"}, {"fullname": "micro_sam.instance_segmentation.mask_data_to_segmentation", "modulename": "micro_sam.instance_segmentation", "qualname": "mask_data_to_segmentation", "kind": "function", "doc": "Convert the output of the automatic mask generation to an instance segmentation.
\n\n\n\n", "signature": "(\tmasks: List[Dict[str, Any]],\twith_background: bool,\tmin_object_size: int = 0,\tmax_object_size: Optional[int] = None) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.AMGBase", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase", "kind": "class", "doc": "The instance segmentation.
\n
Base class for the automatic mask generators.
\n", "bases": "abc.ABC"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.is_initialized", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.is_initialized", "kind": "variable", "doc": "Whether the mask generator has already been initialized.
\n"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.crop_list", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.crop_list", "kind": "variable", "doc": "The list of mask data after initialization.
\n"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.crop_boxes", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.crop_boxes", "kind": "variable", "doc": "The list of crop boxes.
\n"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.original_size", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.original_size", "kind": "variable", "doc": "The original image size.
\n"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.get_state", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.get_state", "kind": "function", "doc": "Get the initialized state of the mask generator.
\n\n\n\n", "signature": "(self) -> Dict[str, Any]:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.set_state", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.set_state", "kind": "function", "doc": "State of the mask generator.
\n
Set the state of the mask generator.
\n\nClear the state of the mask generator.
\n", "signature": "(self):", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.AutomaticMaskGenerator", "modulename": "micro_sam.instance_segmentation", "qualname": "AutomaticMaskGenerator", "kind": "class", "doc": "Generates an instance segmentation without prompts, using a point grid.
\n\nThis class implements the same logic as\nhttps://github.com/facebookresearch/segment-anything/blob/main/segment_anything/automatic_mask_generator.py\nIt decouples the computationally expensive steps of generating masks from the cheap post-processing operation\nto filter these masks to enable grid search and interactively changing the post-processing.
\n\nUse this class as follows:
\n\namg = AutomaticMaskGenerator(predictor)\namg.initialize(image) # Initialize the masks, this takes care of all expensive computations.\nmasks = amg.generate(pred_iou_thresh=0.8) # Generate the masks. This is fast and enables testing parameters\n
\npoint_grids
must provide explicit point sampling.Initialize image embeddings and masks for an image.
\n\nutil.precompute_image_embeddings
for details.image
has three spatial dimensions\nor a time dimension and two spatial dimensions.Generate instance segmentation for the currently initialized image.
\n\n\n\n", "signature": "(\tself,\tpred_iou_thresh: float = 0.88,\tstability_score_thresh: float = 0.95,\tbox_nms_thresh: float = 0.7,\tcrop_nms_thresh: float = 0.7,\tmin_mask_region_area: int = 0,\toutput_mode: str = 'binary_mask') -> List[Dict[str, Any]]:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.TiledAutomaticMaskGenerator", "modulename": "micro_sam.instance_segmentation", "qualname": "TiledAutomaticMaskGenerator", "kind": "class", "doc": "The instance segmentation masks.
\n
Generates an instance segmentation without prompts, using a point grid.
\n\nImplements the same functionality as AutomaticMaskGenerator
but for tiled embeddings.
point_grids
must provide explicit point sampling.Initialize image embeddings and masks for an image.
\n\nutil.precompute_image_embeddings
for details.image
has three spatial dimensions\nor a time dimension and two spatial dimensions.Adapter to contain the UNETR decoder in a single module.
\n\nTo apply the decoder on top of pre-computed embeddings for\nthe segmentation functionality.\nSee also: https://github.com/constantinpape/torch-em/blob/main/torch_em/model/unetr.py
\n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.__init__", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.__init__", "kind": "function", "doc": "Initializes internal Module state, shared by both nn.Module and ScriptModule.
\n", "signature": "(unetr)"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.base", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.base", "kind": "variable", "doc": "\n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.out_conv", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.out_conv", "kind": "variable", "doc": "\n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.deconv_out", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.deconv_out", "kind": "variable", "doc": "\n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.decoder_head", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.decoder_head", "kind": "variable", "doc": "\n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.final_activation", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.final_activation", "kind": "variable", "doc": "\n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.postprocess_masks", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.postprocess_masks", "kind": "variable", "doc": "\n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.decoder", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.decoder", "kind": "variable", "doc": "\n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.deconv1", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.deconv1", "kind": "variable", "doc": "\n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.deconv2", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.deconv2", "kind": "variable", "doc": "\n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.deconv3", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.deconv3", "kind": "variable", "doc": "\n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.deconv4", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.deconv4", "kind": "variable", "doc": "\n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.forward", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.forward", "kind": "function", "doc": "Defines the computation performed at every call.
\n\nShould be overridden by all subclasses.
\n\nAlthough the recipe for forward pass needs to be defined within\nthis function, one should call the Module
instance afterwards\ninstead of this since the former takes care of running the\nregistered hooks while the latter silently ignores them.
Get UNETR model for automatic instance segmentation.
\n\n\n\n", "signature": "(\timage_encoder: torch.nn.modules.module.Module,\tdecoder_state: Optional[collections.OrderedDict[str, torch.Tensor]] = None,\tdevice: Union[str, torch.device, NoneType] = None) -> torch.nn.modules.module.Module:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.get_decoder", "modulename": "micro_sam.instance_segmentation", "qualname": "get_decoder", "kind": "function", "doc": "The UNETR model.
\n
Get decoder to predict outputs for automatic instance segmentation
\n\n\n\n", "signature": "(\timage_encoder: torch.nn.modules.module.Module,\tdecoder_state: collections.OrderedDict[str, torch.Tensor],\tdevice: Union[str, torch.device, NoneType] = None) -> micro_sam.instance_segmentation.DecoderAdapter:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.get_predictor_and_decoder", "modulename": "micro_sam.instance_segmentation", "qualname": "get_predictor_and_decoder", "kind": "function", "doc": "The decoder for instance segmentation.
\n
Load the SAM model (predictor) and instance segmentation decoder.
\n\nThis requires a checkpoint that contains the state for both predictor\nand decoder.
\n\n\n\n", "signature": "(\tmodel_type: str,\tcheckpoint_path: Union[str, os.PathLike],\tdevice: Union[str, torch.device, NoneType] = None) -> Tuple[segment_anything.predictor.SamPredictor, micro_sam.instance_segmentation.DecoderAdapter]:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder", "kind": "class", "doc": "The SAM predictor.\n The decoder for instance segmentation.
\n
Generates an instance segmentation without prompts, using a decoder.
\n\nImplements the same interface as AutomaticMaskGenerator
.
Use this class as follows:
\n\nsegmenter = InstanceSegmentationWithDecoder(predictor, decoder)\nsegmenter.initialize(image) # Predict the image embeddings and decoder outputs.\nmasks = segmenter.generate(center_distance_threshold=0.75) # Generate the instance segmentation.\n
\nWhether the mask generator has already been initialized.
\n"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.initialize", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.initialize", "kind": "function", "doc": "Initialize image embeddings and decoder predictions for an image.
\n\nutil.precompute_image_embeddings
for details.image
has three spatial dimensions\nor a time dimension and two spatial dimensions.Generate instance segmentation for the currently initialized image.
\n\n\n\n", "signature": "(\tself,\tcenter_distance_threshold: float = 0.5,\tboundary_distance_threshold: float = 0.5,\tforeground_threshold: float = 0.5,\tforeground_smoothing: float = 1.0,\tdistance_smoothing: float = 1.6,\tmin_size: int = 0,\toutput_mode: Optional[str] = 'binary_mask') -> List[Dict[str, Any]]:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.get_state", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.get_state", "kind": "function", "doc": "The instance segmentation masks.
\n
Get the initialized state of the instance segmenter.
\n\n\n\n", "signature": "(self) -> Dict[str, Any]:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.set_state", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.set_state", "kind": "function", "doc": "Instance segmentation state.
\n
Set the state of the instance segmenter.
\n\nClear the state of the instance segmenter.
\n", "signature": "(self):", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.TiledInstanceSegmentationWithDecoder", "modulename": "micro_sam.instance_segmentation", "qualname": "TiledInstanceSegmentationWithDecoder", "kind": "class", "doc": "Same as InstanceSegmentationWithDecoder
but for tiled image embeddings.
Initialize image embeddings and decoder predictions for an image.
\n\nutil.precompute_image_embeddings
for details.image
has three spatial dimensions\nor a time dimension and two spatial dimensions.Get the automatic mask generator class.
\n\n\n\n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tis_tiled: bool,\tdecoder: Optional[torch.nn.modules.module.Module] = None,\t**kwargs) -> Union[micro_sam.instance_segmentation.AMGBase, micro_sam.instance_segmentation.InstanceSegmentationWithDecoder]:", "funcdef": "def"}, {"fullname": "micro_sam.multi_dimensional_segmentation", "modulename": "micro_sam.multi_dimensional_segmentation", "kind": "module", "doc": "The automatic mask generator.
\n
Multi-dimensional segmentation with segment anything.
\n"}, {"fullname": "micro_sam.multi_dimensional_segmentation.PROJECTION_MODES", "modulename": "micro_sam.multi_dimensional_segmentation", "qualname": "PROJECTION_MODES", "kind": "variable", "doc": "\n", "default_value": "('box', 'mask', 'points', 'points_and_mask', 'single_point')"}, {"fullname": "micro_sam.multi_dimensional_segmentation.segment_mask_in_volume", "modulename": "micro_sam.multi_dimensional_segmentation", "qualname": "segment_mask_in_volume", "kind": "function", "doc": "Segment an object mask in in volumetric data.
\n\n\n\n", "signature": "(\tsegmentation: numpy.ndarray,\tpredictor: segment_anything.predictor.SamPredictor,\timage_embeddings: Dict[str, Any],\tsegmented_slices: numpy.ndarray,\tstop_lower: bool,\tstop_upper: bool,\tiou_threshold: float,\tprojection: Union[str, dict],\tupdate_progress: Optional[<built-in function callable>] = None,\tbox_extension: float = 0.0,\tverbose: bool = False) -> Tuple[numpy.ndarray, Tuple[int, int]]:", "funcdef": "def"}, {"fullname": "micro_sam.multi_dimensional_segmentation.merge_instance_segmentation_3d", "modulename": "micro_sam.multi_dimensional_segmentation", "qualname": "merge_instance_segmentation_3d", "kind": "function", "doc": "Array with the volumetric segmentation.\n Tuple with the first and last segmented slice.
\n
Merge stacked 2d instance segmentations into a consistent 3d segmentation.
\n\nSolves a multicut problem based on the overlap of objects to merge across z.
\n\n\n\n", "signature": "(\tslice_segmentation: numpy.ndarray,\tbeta: float = 0.5,\twith_background: bool = True,\tgap_closing: Optional[int] = None,\tmin_z_extent: Optional[int] = None,\tverbose: bool = True,\tpbar_init: Optional[<built-in function callable>] = None,\tpbar_update: Optional[<built-in function callable>] = None) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.multi_dimensional_segmentation.automatic_3d_segmentation", "modulename": "micro_sam.multi_dimensional_segmentation", "qualname": "automatic_3d_segmentation", "kind": "function", "doc": "The merged segmentation.
\n
Segment volume in 3d.
\n\nFirst segments slices individually in 2d and then merges them across 3d\nbased on overlap of objects between slices.
\n\n\n\n", "signature": "(\tvolume: numpy.ndarray,\tpredictor: segment_anything.predictor.SamPredictor,\tsegmentor: micro_sam.instance_segmentation.AMGBase,\tembedding_path: Union[os.PathLike, str, NoneType] = None,\twith_background: bool = True,\tgap_closing: Optional[int] = None,\tmin_z_extent: Optional[int] = None,\tverbose: bool = True,\t**kwargs) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.precompute_state", "modulename": "micro_sam.precompute_state", "kind": "module", "doc": "The segmentation.
\n
Precompute image embeddings and automatic mask generator state for image data.
\n"}, {"fullname": "micro_sam.precompute_state.cache_amg_state", "modulename": "micro_sam.precompute_state", "qualname": "cache_amg_state", "kind": "function", "doc": "Compute and cache or load the state for the automatic mask generator.
\n\n\n\n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\traw: numpy.ndarray,\timage_embeddings: Dict[str, Any],\tsave_path: Union[str, os.PathLike],\tverbose: bool = True,\ti: Optional[int] = None,\t**kwargs) -> micro_sam.instance_segmentation.AMGBase:", "funcdef": "def"}, {"fullname": "micro_sam.precompute_state.cache_is_state", "modulename": "micro_sam.precompute_state", "qualname": "cache_is_state", "kind": "function", "doc": "The automatic mask generator class with the cached state.
\n
Compute and cache or load the state for the automatic mask generator.
\n\n\n\n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tdecoder: torch.nn.modules.module.Module,\traw: numpy.ndarray,\timage_embeddings: Dict[str, Any],\tsave_path: Union[str, os.PathLike],\tverbose: bool = True,\ti: Optional[int] = None,\tskip_load: bool = False,\t**kwargs) -> Optional[micro_sam.instance_segmentation.AMGBase]:", "funcdef": "def"}, {"fullname": "micro_sam.precompute_state.precompute_state", "modulename": "micro_sam.precompute_state", "qualname": "precompute_state", "kind": "function", "doc": "The instance segmentation class with the cached state.
\n
Precompute the image embeddings and other optional state for the input image(s).
\n\nkey
must be given. In case of a folder\nit can be given to provide a glob pattern to subselect files from the folder.Functions for prompt-based segmentation with Segment Anything.
\n"}, {"fullname": "micro_sam.prompt_based_segmentation.segment_from_points", "modulename": "micro_sam.prompt_based_segmentation", "qualname": "segment_from_points", "kind": "function", "doc": "Segmentation from point prompts.
\n\n\n\n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tpoints: numpy.ndarray,\tlabels: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tmultimask_output: bool = False,\treturn_all: bool = False,\tuse_best_multimask: Optional[bool] = None):", "funcdef": "def"}, {"fullname": "micro_sam.prompt_based_segmentation.segment_from_mask", "modulename": "micro_sam.prompt_based_segmentation", "qualname": "segment_from_mask", "kind": "function", "doc": "The binary segmentation mask.
\n
Segmentation from a mask prompt.
\n\n\n\n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tmask: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tuse_box: bool = True,\tuse_mask: bool = True,\tuse_points: bool = False,\toriginal_size: Optional[Tuple[int, ...]] = None,\tmultimask_output: bool = False,\treturn_all: bool = False,\treturn_logits: bool = False,\tbox_extension: float = 0.0,\tbox: Optional[numpy.ndarray] = None,\tpoints: Optional[numpy.ndarray] = None,\tlabels: Optional[numpy.ndarray] = None,\tuse_single_point: bool = False):", "funcdef": "def"}, {"fullname": "micro_sam.prompt_based_segmentation.segment_from_box", "modulename": "micro_sam.prompt_based_segmentation", "qualname": "segment_from_box", "kind": "function", "doc": "The binary segmentation mask.
\n
Segmentation from a box prompt.
\n\n\n\n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tbox: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tmultimask_output: bool = False,\treturn_all: bool = False,\tbox_extension: float = 0.0):", "funcdef": "def"}, {"fullname": "micro_sam.prompt_based_segmentation.segment_from_box_and_points", "modulename": "micro_sam.prompt_based_segmentation", "qualname": "segment_from_box_and_points", "kind": "function", "doc": "The binary segmentation mask.
\n
Segmentation from a box prompt and point prompts.
\n\n\n\n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tbox: numpy.ndarray,\tpoints: numpy.ndarray,\tlabels: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tmultimask_output: bool = False,\treturn_all: bool = False):", "funcdef": "def"}, {"fullname": "micro_sam.prompt_generators", "modulename": "micro_sam.prompt_generators", "kind": "module", "doc": "The binary segmentation mask.
\n
Classes for generating prompts from ground-truth segmentation masks.\nFor training or evaluation of prompt-based segmentation.
\n"}, {"fullname": "micro_sam.prompt_generators.PromptGeneratorBase", "modulename": "micro_sam.prompt_generators", "qualname": "PromptGeneratorBase", "kind": "class", "doc": "PromptGeneratorBase is an interface to implement specific prompt generators.
\n"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator", "kind": "class", "doc": "Generate point and/or box prompts from an instance segmentation.
\n\nYou can use this class to derive prompts from an instance segmentation, either for\nevaluation purposes or for training Segment Anything on custom data.\nIn order to use this generator you need to precompute the bounding boxes and center\ncoordiantes of the instance segmentation, using e.g. util.get_centers_and_bounding_boxes
.
Here's an example for how to use this class:
\n\n# Initialize generator for 1 positive and 4 negative point prompts.\nprompt_generator = PointAndBoxPromptGenerator(1, 4, dilation_strength=8)\n\n# Precompute the bounding boxes for the given segmentation\nbounding_boxes, _ = util.get_centers_and_bounding_boxes(segmentation)\n\n# generate point prompts for the objects with ids 1, 2 and 3\nseg_ids = (1, 2, 3)\nobject_mask = np.stack([segmentation == seg_id for seg_id in seg_ids])[:, None]\nthis_bounding_boxes = [bounding_boxes[seg_id] for seg_id in seg_ids]\npoint_coords, point_labels, _, _ = prompt_generator(object_mask, this_bounding_boxes)\n
\nGenerate point prompts from an instance segmentation iteratively.
\n", "bases": "PromptGeneratorBase"}, {"fullname": "micro_sam.sam_annotator", "modulename": "micro_sam.sam_annotator", "kind": "module", "doc": "The interactive annotation tools.
\n"}, {"fullname": "micro_sam.sam_annotator.annotator_2d", "modulename": "micro_sam.sam_annotator.annotator_2d", "kind": "module", "doc": "\n"}, {"fullname": "micro_sam.sam_annotator.annotator_2d.Annotator2d", "modulename": "micro_sam.sam_annotator.annotator_2d", "qualname": "Annotator2d", "kind": "class", "doc": "Base class for micro_sam annotation plugins.
\n\nImplements the logic for the 2d, 3d and tracking annotator.\nThe annotators differ in their data dimensionality and the widgets.
\n", "bases": "micro_sam.sam_annotator._annotator._AnnotatorBase"}, {"fullname": "micro_sam.sam_annotator.annotator_2d.Annotator2d.__init__", "modulename": "micro_sam.sam_annotator.annotator_2d", "qualname": "Annotator2d.__init__", "kind": "function", "doc": "Create the annotator GUI.
\n\nStart the 2d annotation tool for a given image.
\n\nNone
then the whole image is passed to Segment Anything.\n\n", "signature": "(\timage: numpy.ndarray,\tembedding_path: Optional[str] = None,\tsegmentation_result: Optional[numpy.ndarray] = None,\tmodel_type: str = 'vit_l',\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\treturn_viewer: bool = False,\tviewer: Optional[napari.viewer.Viewer] = None,\tprecompute_amg_state: bool = False,\tcheckpoint_path: Optional[str] = None,\tdevice: Union[str, torch.device, NoneType] = None,\tprefer_decoder: bool = True) -> Optional[napari.viewer.Viewer]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.annotator_3d", "modulename": "micro_sam.sam_annotator.annotator_3d", "kind": "module", "doc": "\n"}, {"fullname": "micro_sam.sam_annotator.annotator_3d.Annotator3d", "modulename": "micro_sam.sam_annotator.annotator_3d", "qualname": "Annotator3d", "kind": "class", "doc": "The napari viewer, only returned if
\nreturn_viewer=True
.
Base class for micro_sam annotation plugins.
\n\nImplements the logic for the 2d, 3d and tracking annotator.\nThe annotators differ in their data dimensionality and the widgets.
\n", "bases": "micro_sam.sam_annotator._annotator._AnnotatorBase"}, {"fullname": "micro_sam.sam_annotator.annotator_3d.Annotator3d.__init__", "modulename": "micro_sam.sam_annotator.annotator_3d", "qualname": "Annotator3d.__init__", "kind": "function", "doc": "Create the annotator GUI.
\n\nStart the 3d annotation tool for a given image volume.
\n\nNone
then the whole image is passed to Segment Anything.\n\n", "signature": "(\timage: numpy.ndarray,\tembedding_path: Optional[str] = None,\tsegmentation_result: Optional[numpy.ndarray] = None,\tmodel_type: str = 'vit_l',\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\treturn_viewer: bool = False,\tviewer: Optional[napari.viewer.Viewer] = None,\tprecompute_amg_state: bool = False,\tcheckpoint_path: Optional[str] = None,\tdevice: Union[str, torch.device, NoneType] = None,\tprefer_decoder: bool = True) -> Optional[napari.viewer.Viewer]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.annotator_tracking", "modulename": "micro_sam.sam_annotator.annotator_tracking", "kind": "module", "doc": "\n"}, {"fullname": "micro_sam.sam_annotator.annotator_tracking.AnnotatorTracking", "modulename": "micro_sam.sam_annotator.annotator_tracking", "qualname": "AnnotatorTracking", "kind": "class", "doc": "The napari viewer, only returned if
\nreturn_viewer=True
.
Base class for micro_sam annotation plugins.
\n\nImplements the logic for the 2d, 3d and tracking annotator.\nThe annotators differ in their data dimensionality and the widgets.
\n", "bases": "micro_sam.sam_annotator._annotator._AnnotatorBase"}, {"fullname": "micro_sam.sam_annotator.annotator_tracking.AnnotatorTracking.__init__", "modulename": "micro_sam.sam_annotator.annotator_tracking", "qualname": "AnnotatorTracking.__init__", "kind": "function", "doc": "Create the annotator GUI.
\n\nStart the tracking annotation tool fora given timeseries.
\n\nNone
then the whole image is passed to Segment Anything.\n\n", "signature": "(\timage: numpy.ndarray,\tembedding_path: Optional[str] = None,\tmodel_type: str = 'vit_l',\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\treturn_viewer: bool = False,\tviewer: Optional[napari.viewer.Viewer] = None,\tcheckpoint_path: Optional[str] = None,\tdevice: Union[str, torch.device, NoneType] = None) -> Optional[napari.viewer.Viewer]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator", "modulename": "micro_sam.sam_annotator.image_series_annotator", "kind": "module", "doc": "\n"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator.image_series_annotator", "modulename": "micro_sam.sam_annotator.image_series_annotator", "qualname": "image_series_annotator", "kind": "function", "doc": "The napari viewer, only returned if
\nreturn_viewer=True
.
Run the annotation tool for a series of images (supported for both 2d and 3d images).
\n\nNone
then the whole image is passed to Segment Anything.\n\n", "signature": "(\timages: Union[List[Union[os.PathLike, str]], List[numpy.ndarray]],\toutput_folder: str,\tmodel_type: str = 'vit_l',\tembedding_path: Optional[str] = None,\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\tviewer: Optional[napari.viewer.Viewer] = None,\treturn_viewer: bool = False,\tprecompute_amg_state: bool = False,\tcheckpoint_path: Optional[str] = None,\tis_volumetric: bool = False,\tdevice: Union[str, torch.device, NoneType] = None,\tprefer_decoder: bool = True) -> Optional[napari.viewer.Viewer]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator.image_folder_annotator", "modulename": "micro_sam.sam_annotator.image_series_annotator", "qualname": "image_folder_annotator", "kind": "function", "doc": "The napari viewer, only returned if
\nreturn_viewer=True
.
Run the 2d annotation tool for a series of images in a folder.
\n\ninput_folder
.\nBy default all files will be loaded.micro_sam.sam_annotator.image_series_annotator
.\n\n", "signature": "(\tinput_folder: str,\toutput_folder: str,\tpattern: str = '*',\tviewer: Optional[napari.viewer.Viewer] = None,\treturn_viewer: bool = False,\t**kwargs) -> Optional[napari.viewer.Viewer]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator.ImageSeriesAnnotator", "modulename": "micro_sam.sam_annotator.image_series_annotator", "qualname": "ImageSeriesAnnotator", "kind": "class", "doc": "The napari viewer, only returned if
\nreturn_viewer=True
.
QWidget(parent: typing.Optional[QWidget] = None, flags: Union[Qt.WindowFlags, Qt.WindowType] = Qt.WindowFlags())
\n", "bases": "micro_sam.sam_annotator._widgets._WidgetBase"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator.ImageSeriesAnnotator.__init__", "modulename": "micro_sam.sam_annotator.image_series_annotator", "qualname": "ImageSeriesAnnotator.__init__", "kind": "function", "doc": "\n", "signature": "(viewer: napari.viewer.Viewer, parent=None)"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator.ImageSeriesAnnotator.run_button", "modulename": "micro_sam.sam_annotator.image_series_annotator", "qualname": "ImageSeriesAnnotator.run_button", "kind": "variable", "doc": "\n"}, {"fullname": "micro_sam.sam_annotator.training_ui", "modulename": "micro_sam.sam_annotator.training_ui", "kind": "module", "doc": "\n"}, {"fullname": "micro_sam.sam_annotator.training_ui.TrainingWidget", "modulename": "micro_sam.sam_annotator.training_ui", "qualname": "TrainingWidget", "kind": "class", "doc": "QWidget(parent: typing.Optional[QWidget] = None, flags: Union[Qt.WindowFlags, Qt.WindowType] = Qt.WindowFlags())
\n", "bases": "micro_sam.sam_annotator._widgets._WidgetBase"}, {"fullname": "micro_sam.sam_annotator.training_ui.TrainingWidget.__init__", "modulename": "micro_sam.sam_annotator.training_ui", "qualname": "TrainingWidget.__init__", "kind": "function", "doc": "\n", "signature": "(parent=None)"}, {"fullname": "micro_sam.sam_annotator.training_ui.TrainingWidget.run_button", "modulename": "micro_sam.sam_annotator.training_ui", "qualname": "TrainingWidget.run_button", "kind": "variable", "doc": "\n"}, {"fullname": "micro_sam.sam_annotator.util", "modulename": "micro_sam.sam_annotator.util", "kind": "module", "doc": "\n"}, {"fullname": "micro_sam.sam_annotator.util.point_layer_to_prompts", "modulename": "micro_sam.sam_annotator.util", "qualname": "point_layer_to_prompts", "kind": "function", "doc": "Extract point prompts for SAM from a napari point layer.
\n\n\n\n", "signature": "(\tlayer: napari.layers.points.points.Points,\ti=None,\ttrack_id=None,\twith_stop_annotation=True) -> Optional[Tuple[numpy.ndarray, numpy.ndarray]]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.util.shape_layer_to_prompts", "modulename": "micro_sam.sam_annotator.util", "qualname": "shape_layer_to_prompts", "kind": "function", "doc": "The point coordinates for the prompts.\n The labels (positive or negative / 1 or 0) for the prompts.
\n
Extract prompts for SAM from a napari shape layer.
\n\nExtracts the bounding box for 'rectangle' shapes and the bounding box and corresponding mask\nfor 'ellipse' and 'polygon' shapes.
\n\n\n\n", "signature": "(\tlayer: napari.layers.shapes.shapes.Shapes,\tshape: Tuple[int, int],\ti=None,\ttrack_id=None) -> Tuple[List[numpy.ndarray], List[Optional[numpy.ndarray]]]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.util.prompt_layer_to_state", "modulename": "micro_sam.sam_annotator.util", "qualname": "prompt_layer_to_state", "kind": "function", "doc": "The box prompts.\n The mask prompts.
\n
Get the state of the track from a point layer for a given timeframe.
\n\nOnly relevant for annotator_tracking.
\n\n\n\n", "signature": "(prompt_layer: napari.layers.points.points.Points, i: int) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.util.prompt_layers_to_state", "modulename": "micro_sam.sam_annotator.util", "qualname": "prompt_layers_to_state", "kind": "function", "doc": "The state of this frame (either \"division\" or \"track\").
\n
Get the state of the track from a point layer and shape layer for a given timeframe.
\n\nOnly relevant for annotator_tracking.
\n\n\n\n", "signature": "(\tpoint_layer: napari.layers.points.points.Points,\tbox_layer: napari.layers.shapes.shapes.Shapes,\ti: int) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data", "modulename": "micro_sam.sample_data", "kind": "module", "doc": "The state of this frame (either \"division\" or \"track\").
\n
Sample microscopy data.
\n\nYou can change the download location for sample data and model weights\nby setting the environment variable: MICROSAM_CACHEDIR
\n\nBy default sample data is downloaded to a folder named 'micro_sam/sample_data'\ninside your default cache directory, eg:\n * Mac: ~/Library/Caches/
Download the sample images for the image series annotator.
\n\n\n\n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_image_series", "modulename": "micro_sam.sample_data", "qualname": "sample_data_image_series", "kind": "function", "doc": "The folder that contains the downloaded data.
\n
Provides image series example image to napari.
\n\nOpens as three separate image layers in napari (one per image in series).\nThe third image in the series has a different size and modality.
\n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_wholeslide_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_wholeslide_example_data", "kind": "function", "doc": "Download the sample data for the 2d annotator.
\n\nThis downloads part of a whole-slide image from the NeurIPS Cell Segmentation Challenge.\nSee https://neurips22-cellseg.grand-challenge.org/ for details on the data.
\n\n\n\n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_wholeslide", "modulename": "micro_sam.sample_data", "qualname": "sample_data_wholeslide", "kind": "function", "doc": "The path of the downloaded image.
\n
Provides wholeslide 2d example image to napari.
\n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_livecell_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_livecell_example_data", "kind": "function", "doc": "Download the sample data for the 2d annotator.
\n\nThis downloads a single image from the LiveCELL dataset.\nSee https://doi.org/10.1038/s41592-021-01249-6 for details on the data.
\n\n\n\n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_livecell", "modulename": "micro_sam.sample_data", "qualname": "sample_data_livecell", "kind": "function", "doc": "The path of the downloaded image.
\n
Provides livecell 2d example image to napari.
\n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_hela_2d_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_hela_2d_example_data", "kind": "function", "doc": "Download the sample data for the 2d annotator.
\n\nThis downloads a single image from the HeLa CTC dataset.
\n\n\n\n", "signature": "(save_directory: Union[str, os.PathLike]) -> Union[str, os.PathLike]:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_hela_2d", "modulename": "micro_sam.sample_data", "qualname": "sample_data_hela_2d", "kind": "function", "doc": "The path of the downloaded image.
\n
Provides HeLa 2d example image to napari.
\n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_3d_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_3d_example_data", "kind": "function", "doc": "Download the sample data for the 3d annotator.
\n\nThis downloads the Lucchi++ datasets from https://casser.io/connectomics/.\nIt is a dataset for mitochondria segmentation in EM.
\n\n\n\n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_3d", "modulename": "micro_sam.sample_data", "qualname": "sample_data_3d", "kind": "function", "doc": "The folder that contains the downloaded data.
\n
Provides Lucchi++ 3d example image to napari.
\n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_tracking_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_tracking_example_data", "kind": "function", "doc": "Download the sample data for the tracking annotator.
\n\nThis data is the cell tracking challenge dataset DIC-C2DH-HeLa.\nCell tracking challenge webpage: http://data.celltrackingchallenge.net\nHeLa cells on a flat glass\nDr. G. van Cappellen. Erasmus Medical Center, Rotterdam, The Netherlands\nTraining dataset: http://data.celltrackingchallenge.net/training-datasets/DIC-C2DH-HeLa.zip (37 MB)\nChallenge dataset: http://data.celltrackingchallenge.net/challenge-datasets/DIC-C2DH-HeLa.zip (41 MB)
\n\n\n\n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_tracking", "modulename": "micro_sam.sample_data", "qualname": "sample_data_tracking", "kind": "function", "doc": "The folder that contains the downloaded data.
\n
Provides tracking example dataset to napari.
\n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_tracking_segmentation_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_tracking_segmentation_data", "kind": "function", "doc": "Download groundtruth segmentation for the tracking example data.
\n\nThis downloads the groundtruth segmentation for the image data from fetch_tracking_example_data
.
\n\n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_segmentation", "modulename": "micro_sam.sample_data", "qualname": "sample_data_segmentation", "kind": "function", "doc": "The folder that contains the downloaded data.
\n
Provides segmentation example dataset to napari.
\n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.synthetic_data", "modulename": "micro_sam.sample_data", "qualname": "synthetic_data", "kind": "function", "doc": "Create synthetic image data and segmentation for training.
\n", "signature": "(shape, seed=None):", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_nucleus_3d_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_nucleus_3d_example_data", "kind": "function", "doc": "Download the sample data for 3d segmentation of nuclei.
\n\nThis data contains a small crop from a volume from the publication\n\"Efficient automatic 3D segmentation of cell nuclei for high-content screening\"\nhttps://doi.org/10.1186/s12859-022-04737-4
\n\n\n\n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.training", "modulename": "micro_sam.training", "kind": "module", "doc": "The path of the downloaded image.
\n
Functionality for training Segment Anything.
\n"}, {"fullname": "micro_sam.training.joint_sam_trainer", "modulename": "micro_sam.training.joint_sam_trainer", "kind": "module", "doc": "\n"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer", "kind": "class", "doc": "Trainer class for training the Segment Anything model.
\n\nThis class is derived from torch_em.trainer.DefaultTrainer
.\nCheck out https://github.com/constantinpape/torch-em/blob/main/torch_em/trainer/default_trainer.py\nfor details on its usage and implementation.
micro_sam.training.util.ConvertToSamInputs
can be used here.n_sub_iteration
)Trainer class for training the Segment Anything model.
\n\nThis class is derived from torch_em.trainer.DefaultTrainer
.\nCheck out https://github.com/constantinpape/torch-em/blob/main/torch_em/trainer/default_trainer.py\nfor details on its usage and implementation.
micro_sam.training.util.ConvertToSamInputs
can be used here.n_sub_iteration
)Wrapper to make the SegmentAnything model trainable.
\n\nInitializes internal Module state, shared by both nn.Module and ScriptModule.
\n", "signature": "(\tsam: segment_anything.modeling.sam.Sam,\tdevice: Union[str, torch.device])"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.sam", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.sam", "kind": "variable", "doc": "\n"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.device", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.device", "kind": "variable", "doc": "\n"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.transform", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.transform", "kind": "variable", "doc": "\n"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.preprocess", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.preprocess", "kind": "function", "doc": "Resize, normalize pixel values and pad to a square input.
\n\n\n\n", "signature": "(self, x: torch.Tensor) -> Tuple[torch.Tensor, Tuple[int, int]]:", "funcdef": "def"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.image_embeddings_oft", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.image_embeddings_oft", "kind": "function", "doc": "\n", "signature": "(self, batched_inputs):", "funcdef": "def"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.forward", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.forward", "kind": "function", "doc": "The resized, normalized and padded tensor.\n The shape of the image after resizing.
\n
Forward pass.
\n\n\n\n", "signature": "(\tself,\tbatched_inputs: List[Dict[str, Any]],\timage_embeddings: torch.Tensor,\tmultimask_output: bool = False) -> List[Dict[str, Any]]:", "funcdef": "def"}, {"fullname": "micro_sam.training.training", "modulename": "micro_sam.training.training", "kind": "module", "doc": "\n"}, {"fullname": "micro_sam.training.training.FilePath", "modulename": "micro_sam.training.training", "qualname": "FilePath", "kind": "variable", "doc": "\n", "default_value": "typing.Union[str, os.PathLike]"}, {"fullname": "micro_sam.training.training.train_sam", "modulename": "micro_sam.training.training", "qualname": "train_sam", "kind": "function", "doc": "The predicted segmentation masks and iou values.
\n
Run training for a SAM model.
\n\nCreate a PyTorch Dataset for training a SAM model.
\n\n\n\n", "signature": "(\traw_paths: Union[List[Union[os.PathLike, str]], str, os.PathLike],\traw_key: Optional[str],\tlabel_paths: Union[List[Union[os.PathLike, str]], str, os.PathLike],\tlabel_key: Optional[str],\tpatch_shape: Tuple[int],\twith_segmentation_decoder: bool,\twith_channels: bool = False,\tsampler=None,\tn_samples: Optional[int] = None,\tis_train: bool = True,\t**kwargs) -> torch.utils.data.dataset.Dataset:", "funcdef": "def"}, {"fullname": "micro_sam.training.training.default_sam_loader", "modulename": "micro_sam.training.training", "qualname": "default_sam_loader", "kind": "function", "doc": "\n", "signature": "(**kwargs) -> torch.utils.data.dataloader.DataLoader:", "funcdef": "def"}, {"fullname": "micro_sam.training.training.CONFIGURATIONS", "modulename": "micro_sam.training.training", "qualname": "CONFIGURATIONS", "kind": "variable", "doc": "The dataset.
\n
Best training configurations for given hardware resources.
\n", "default_value": "{'Minimal': {'model_type': 'vit_t', 'n_objects_per_batch': 4, 'n_sub_iteration': 4}, 'CPU': {'model_type': 'vit_b', 'n_objects_per_batch': 10}, 'gtx1080': {'model_type': 'vit_t', 'n_objects_per_batch': 5}, 'rtx5000': {'model_type': 'vit_b', 'n_objects_per_batch': 10}, 'V100': {'model_type': 'vit_b'}, 'A100': {'model_type': 'vit_h'}}"}, {"fullname": "micro_sam.training.training.train_sam_for_configuration", "modulename": "micro_sam.training.training", "qualname": "train_sam_for_configuration", "kind": "function", "doc": "Run training for a SAM model with the configuration for a given hardware resource.
\n\nSelects the best training settings for the given configuration.\nThe available configurations are listed in CONFIGURATIONS
.
train_sam
.Identity transformation.
\n\nThis is a helper function to skip data normalization when finetuning SAM.\nData normalization is performed within the model and should thus be skipped as\na preprocessing step in training.
\n", "signature": "(x):", "funcdef": "def"}, {"fullname": "micro_sam.training.util.require_8bit", "modulename": "micro_sam.training.util", "qualname": "require_8bit", "kind": "function", "doc": "Transformation to require 8bit input data range (0-255).
\n", "signature": "(x):", "funcdef": "def"}, {"fullname": "micro_sam.training.util.get_trainable_sam_model", "modulename": "micro_sam.training.util", "qualname": "get_trainable_sam_model", "kind": "function", "doc": "Get the trainable sam model.
\n\ncheckpoint_path
.\n\n", "signature": "(\tmodel_type: str = 'vit_l',\tdevice: Union[str, torch.device, NoneType] = None,\tcheckpoint_path: Union[os.PathLike, str, NoneType] = None,\tfreeze: Optional[List[str]] = None,\treturn_state: bool = False) -> micro_sam.training.trainable_sam.TrainableSAM:", "funcdef": "def"}, {"fullname": "micro_sam.training.util.ConvertToSamInputs", "modulename": "micro_sam.training.util", "qualname": "ConvertToSamInputs", "kind": "class", "doc": "The trainable segment anything model.
\n
Convert outputs of data loader to the expected batched inputs of the SegmentAnything model.
\n\nNone
the prompts will not be resized.Helper functions for downloading Segment Anything models and predicting image embeddings.
\n"}, {"fullname": "micro_sam.util.get_cache_directory", "modulename": "micro_sam.util", "qualname": "get_cache_directory", "kind": "function", "doc": "Get micro-sam cache directory location.
\n\nUsers can set the MICROSAM_CACHEDIR environment variable for a custom cache directory.
\n", "signature": "() -> None:", "funcdef": "def"}, {"fullname": "micro_sam.util.microsam_cachedir", "modulename": "micro_sam.util", "qualname": "microsam_cachedir", "kind": "function", "doc": "Return the micro-sam cache directory.
\n\nReturns the top level cache directory for micro-sam models and sample data.
\n\nEvery time this function is called, we check for any user updates made to\nthe MICROSAM_CACHEDIR os environment variable since the last time.
\n", "signature": "() -> None:", "funcdef": "def"}, {"fullname": "micro_sam.util.models", "modulename": "micro_sam.util", "qualname": "models", "kind": "function", "doc": "Return the segmentation models registry.
\n\nWe recreate the model registry every time this function is called,\nso any user changes to the default micro-sam cache directory location\nare respected.
\n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.util.get_device", "modulename": "micro_sam.util", "qualname": "get_device", "kind": "function", "doc": "Get the torch device.
\n\nIf no device is passed the default device for your system is used.\nElse it will be checked if the device you have passed is supported.
\n\n\n\n", "signature": "(\tdevice: Union[str, torch.device, NoneType] = None) -> Union[str, torch.device]:", "funcdef": "def"}, {"fullname": "micro_sam.util.get_sam_model", "modulename": "micro_sam.util", "qualname": "get_sam_model", "kind": "function", "doc": "The device.
\n
Get the SegmentAnything Predictor.
\n\nThis function will download the required model or load it from the cached weight file.\nThis location of the cache can be changed by setting the environment variable: MICROSAM_CACHEDIR.\nThe name of the requested model can be set via model_type
.\nSee https://computational-cell-analytics.github.io/micro-sam/micro_sam.html#finetuned-models\nfor an overview of the available models
Alternatively this function can also load a model from weights stored in a local filepath.\nThe corresponding file path is given via checkpoint_path
. In this case model_type
\nmust be given as the matching encoder architecture, e.g. \"vit_b\" if the weights are for\na SAM model with vit_b encoder.
By default the models are downloaded to a folder named 'micro_sam/models'\ninside your default cache directory, eg:
\n\nget_model_names
.model_type
. If given, model_type
must match the architecture\ncorresponding to the weight file. E.g. if you use weights for SAM with vit_b encoder\nthen model_type
must be given as \"vit_b\".\n\n", "signature": "(\tmodel_type: str = 'vit_l',\tdevice: Union[str, torch.device, NoneType] = None,\tcheckpoint_path: Union[str, os.PathLike, NoneType] = None,\treturn_sam: bool = False,\treturn_state: bool = False) -> mobile_sam.predictor.SamPredictor:", "funcdef": "def"}, {"fullname": "micro_sam.util.export_custom_sam_model", "modulename": "micro_sam.util", "qualname": "export_custom_sam_model", "kind": "function", "doc": "The segment anything predictor.
\n
Export a finetuned segment anything model to the standard model format.
\n\nThe exported model can be used by the interactive annotation tools in micro_sam.annotator
.
Compute the image embeddings (output of the encoder) for the input.
\n\nIf 'save_path' is given the embeddings will be loaded/saved in a zarr container.
\n\n\n\n", "signature": "(\tpredictor: mobile_sam.predictor.SamPredictor,\tinput_: numpy.ndarray,\tsave_path: Union[str, os.PathLike, NoneType] = None,\tlazy_loading: bool = False,\tndim: Optional[int] = None,\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\tverbose: bool = True,\tpbar_init: Optional[<built-in function callable>] = None,\tpbar_update: Optional[<built-in function callable>] = None) -> Dict[str, Any]:", "funcdef": "def"}, {"fullname": "micro_sam.util.set_precomputed", "modulename": "micro_sam.util", "qualname": "set_precomputed", "kind": "function", "doc": "The image embeddings.
\n
Set the precomputed image embeddings for a predictor.
\n\nprecompute_image_embeddings
.image
has three spatial dimensions\nor a time dimension and two spatial dimensions.\n\n", "signature": "(\tpredictor: mobile_sam.predictor.SamPredictor,\timage_embeddings: Dict[str, Any],\ti: Optional[int] = None,\ttile_id: Optional[int] = None) -> mobile_sam.predictor.SamPredictor:", "funcdef": "def"}, {"fullname": "micro_sam.util.compute_iou", "modulename": "micro_sam.util", "qualname": "compute_iou", "kind": "function", "doc": "The predictor with set features.
\n
Compute the intersection over union of two masks.
\n\n\n\n", "signature": "(mask1: numpy.ndarray, mask2: numpy.ndarray) -> float:", "funcdef": "def"}, {"fullname": "micro_sam.util.get_centers_and_bounding_boxes", "modulename": "micro_sam.util", "qualname": "get_centers_and_bounding_boxes", "kind": "function", "doc": "The intersection over union of the two masks.
\n
Returns the center coordinates of the foreground instances in the ground-truth.
\n\n\n\n", "signature": "(\tsegmentation: numpy.ndarray,\tmode: str = 'v') -> Tuple[Dict[int, numpy.ndarray], Dict[int, tuple]]:", "funcdef": "def"}, {"fullname": "micro_sam.util.load_image_data", "modulename": "micro_sam.util", "qualname": "load_image_data", "kind": "function", "doc": "A dictionary that maps object ids to the corresponding centroid.\n A dictionary that maps object_ids to the corresponding bounding box.
\n
Helper function to load image data from file.
\n\n\n\n", "signature": "(\tpath: str,\tkey: Optional[str] = None,\tlazy_loading: bool = False) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.util.segmentation_to_one_hot", "modulename": "micro_sam.util", "qualname": "segmentation_to_one_hot", "kind": "function", "doc": "The image data.
\n
Convert the segmentation to one-hot encoded masks.
\n\n\n\n", "signature": "(\tsegmentation: numpy.ndarray,\tsegmentation_ids: Optional[numpy.ndarray] = None) -> torch.Tensor:", "funcdef": "def"}, {"fullname": "micro_sam.util.get_block_shape", "modulename": "micro_sam.util", "qualname": "get_block_shape", "kind": "function", "doc": "The one-hot encoded masks.
\n
Get a suitable block shape for chunking a given shape.
\n\nThe primary use for this is determining chunk sizes for\nzarr arrays or block shapes for parallelization.
\n\n\n\n", "signature": "(shape: Tuple[int]) -> Tuple[int]:", "funcdef": "def"}, {"fullname": "micro_sam.visualization", "modulename": "micro_sam.visualization", "kind": "module", "doc": "The block shape.
\n
Functionality for visualizing image embeddings.
\n"}, {"fullname": "micro_sam.visualization.compute_pca", "modulename": "micro_sam.visualization", "qualname": "compute_pca", "kind": "function", "doc": "Compute the pca projection of the embeddings to visualize them as RGB image.
\n\n\n\n", "signature": "(embeddings: numpy.ndarray) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.visualization.project_embeddings_for_visualization", "modulename": "micro_sam.visualization", "qualname": "project_embeddings_for_visualization", "kind": "function", "doc": "PCA of the embeddings, mapped to the pixels.
\n
Project image embeddings to pixel-wise PCA.
\n\n\n\n", "signature": "(\timage_embeddings: Dict[str, Any]) -> Tuple[numpy.ndarray, Tuple[float, ...]]:", "funcdef": "def"}]; // mirrored in build-search-index.js (part 1) // Also split on html tags. this is a cheap heuristic, but good enough.The PCA of the embeddings.\n The scale factor for resizing to the original image size.
\n