From 12edc0e91125ec53ade24dc348d57aec93c58288 Mon Sep 17 00:00:00 2001 From: Constantin Pape Date: Thu, 25 Apr 2024 11:32:58 +0200 Subject: [PATCH] Update docs --- micro_sam.html | 41 +- micro_sam/__version__.html | 2 +- micro_sam/evaluation.html | 29 +- micro_sam/evaluation/evaluation.html | 370 +- micro_sam/evaluation/inference.html | 1609 ++--- .../evaluation/instance_segmentation.html | 434 +- micro_sam/evaluation/livecell.html | 1292 ++-- micro_sam/evaluation/model_comparison.html | 952 +-- micro_sam/inference.html | 543 +- micro_sam/instance_segmentation.html | 5201 ++++++++++------- micro_sam/multi_dimensional_segmentation.html | 1178 ++-- micro_sam/precompute_state.html | 760 ++- micro_sam/prompt_based_segmentation.html | 1434 ++--- micro_sam/prompt_generators.html | 4 +- micro_sam/sam_annotator.html | 19 +- micro_sam/sam_annotator/_state.html | 812 ++- micro_sam/sam_annotator/_widgets.html | 4619 ++++++++++++++- micro_sam/sam_annotator/annotator_2d.html | 988 ++-- micro_sam/sam_annotator/annotator_3d.html | 1176 ++-- .../sam_annotator/annotator_tracking.html | 1589 ++--- .../sam_annotator/image_series_annotator.html | 1517 ++++- micro_sam/sam_annotator/util.html | 1516 +++-- micro_sam/sample_data.html | 16 +- micro_sam/training.html | 2 + micro_sam/training/util.html | 922 +-- micro_sam/util.html | 2791 ++++----- search.js | 2 +- 27 files changed, 19278 insertions(+), 10540 deletions(-) diff --git a/micro_sam.html b/micro_sam.html index 1cd1da27..d2f6c6be 100644 --- a/micro_sam.html +++ b/micro_sam.html @@ -79,6 +79,7 @@

Contents

Submodules

An overview of the functionality of the different tools:

@@ -866,7 +867,7 @@

Annotation Tools

Yes - For multiple objects at a time + Interactive segmentation for multiple objects at a time Yes No No @@ -886,13 +887,13 @@

Annotation Tools

Automatic segmentation Yes - Yes (on dev) + Yes No -

The functionality for the image_series_annotator is not listed because it is identical with the functionality of the annotator_2d.

+

The functionality for image_series_annotator is not listed because it is identical to the functionality of annotator_2d.

Each tool implements the following core logic:

@@ -901,20 +902,28 @@

Annotation Tools

  • Interactive (and automatic) segmentation functionality is implemented by a UI based on napari and magicgui functionality.
  • -

    Each tool has two different entry points:

    +

    Each tool has three different entry points:

    -

    The tools are implemented their own submodules, e.g. micro_sam.sam_annotator.annotator_2d with shared functionality implemented in micro_sam.sam_annotator.util. The function micro_sam.sam_annotator.annotator_2d.annotator_2d_plugin implements the plugin entry point, using the magicgui.magic_factory decorator. micro_sam.sam_annotator.annotator_2d.annotator_2d implements the CLI entry point; it calls the annotator_2d_plugin function internally. -The image embeddings are computed by the embedding widget (@GenevieveBuckley: will need to be implemented in your PR), which takes the image data from an image layer. -In case of the plugin entry point this image layer is created by the user (by loading an image into napari), and the user can then select in the embedding widget which layer to use for embedding computation. -In case of CLI the image data is specified via the -i parameter, the layer is created for that image and the embeddings are computed for it automatically. +

    Each tool is implemented in its own submodule, e.g. micro_sam.sam_annotator.annotator_2d. +The napari plugin is implemented by a class, e.g. micro_sam.sam_annotator.annotator_2d:Annotator2d, inheriting from micro_sam.sam_annotator._annotator._AnnotatorBase. This class implements the core logic for the plugins. +The concrete annotation tools are instantiated by passing widgets from micro_sam.sam_annotator._widgets to it, +which implement the interactive segmentation in 2d, 3d etc. +These plugins are designed so that image embeddings can be computed for user-specified image layers in napari.

    + +

    The function and CLI entry points are implemented by micro_sam.sam_annotator.annotator_2d:annotator_2d (and corresponding functions for the other annotation tools). They are called with image data, precompute the embeddings for that image and start a napari viewer with this image and the annotation plugin.
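    As an illustration of this function entry point, here is a minimal sketch of starting the 2d annotator from Python (the exact keyword arguments, e.g. embedding_path and model_type, are assumptions and may differ between versions):

    import imageio.v3 as imageio
    from micro_sam.sam_annotator import annotator_2d

    # Load a 2d image and start the annotation tool for it.
    # The image embeddings are precomputed (and cached via embedding_path, an assumed argument)
    # before the napari viewer with the annotation plugin opens.
    image = imageio.imread("example_image.tif")
    annotator_2d(image, embedding_path="embeddings.zarr", model_type="vit_b")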

    + +

    \n\n\n

The installers are still experimental and not fully tested. Mac is not supported yet, but we are working on providing an installer for it as well.

    \n\n

    If you encounter problems with them then please consider installing micro_sam via mamba instead.

    \n\n

    Linux Installer:

    \n\n

    To use the installer:

    \n\n\n\n

    \n\n

    Windows Installer:

    \n\n\n\n

    Annotation Tools

    \n\n

    micro_sam provides applications for fast interactive 2d segmentation, 3d segmentation and tracking.\nSee an example for interactive cell segmentation in phase-contrast microscopy (left), interactive segmentation\nof mitochondria in volume EM (middle) and interactive tracking of cells (right).

    \n\n

    \n\n

    \n\n

    The annotation tools can be started from the micro_sam GUI, the command line or from python scripts. The micro_sam GUI can be started by

    \n\n
    $ micro_sam.annotator\n
    \n\n

    They are built using napari and magicgui to provide the viewer and user interface.\nIf you are not familiar with napari yet, start here.\nThe micro_sam tools use the point layer, shape layer and label layer.

    \n\n

    The annotation tools are explained in detail below. In addition to the documentation here we also provide video tutorials.

    \n\n

    Starting via GUI

    \n\n

The annotation tools can be started from a central GUI, which is launched with the command $ micro_sam.annotator or via the executable from an installer.

    \n\n

In the GUI you can select which of the four annotation tools you want to use:

    \n\n

After selecting a tool, a new window will open where you can select the input file path and other optional parameters. Then click the top button to start the tool. Note: If you do not start the annotation tool with a path to pre-computed embeddings, it can take several minutes for napari to open after pressing the button, because the embeddings are being computed.

    \n\n

    Changes in version 0.3:

    \n\n

    We have made two changes in version 0.3 that are not reflected in the documentation below yet:

    \n\n\n\n

    Annotator 2D

    \n\n

    The 2d annotator can be started by

    \n\n\n\n

    The user interface of the 2d annotator looks like this:

    \n\n

    \n\n

    It contains the following elements:

    \n\n
      \n
    1. The napari layers for the image, segmentations and prompts:\n
        \n
      • box_prompts: shape layer that is used to provide box prompts to SegmentAnything.
      • \n
      • prompts: point layer that is used to provide prompts to SegmentAnything. Positive prompts (green points) for marking the object you want to segment, negative prompts (red points) for marking the outside of the object.
      • \n
      • current_object: label layer that contains the object you're currently segmenting.
      • \n
      • committed_objects: label layer with the objects that have already been segmented.
      • \n
      • auto_segmentation: label layer results from using SegmentAnything for automatic instance segmentation.
      • \n
      • raw: image layer that shows the image data.
      • \n
    2. \n
    3. The prompt menu for changing the currently selected point from positive to negative and vice versa. This can also be done by pressing t.
    4. \n
    5. The menu for automatic segmentation. Pressing Segment All Objects will run automatic segmentation. The results will be displayed in the auto_segmentation layer. Change the parameters pred iou thresh and stability score thresh to control how many objects are segmented.
    6. \n
7. The menu for interactive segmentation. Pressing Segment Object (or s) will run segmentation for the current prompts. The result is displayed in current_object.
    8. \n
9. The menu for committing the segmentation. When pressing Commit (or c) the result from the selected layer (either current_object or auto_segmentation) will be transferred from the respective layer to committed_objects.
    10. \n
    11. The menu for clearing the current annotations. Pressing Clear Annotations (or shift c) will clear the current annotations and the current segmentation.
    12. \n
    \n\n

    Note that point prompts and box prompts can be combined. When you're using point prompts you can only segment one object at a time. With box prompts you can segment several objects at once.

    \n\n

    Check out this video for a tutorial for the 2d annotation tool.

    \n\n

    We also provide the image series annotator, which can be used for running the 2d annotator for several images in a folder. You can start by clicking Image series annotator in the GUI, running micro_sam.image_series_annotator in the command line or from a python script.
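    For reference, a hypothetical sketch of starting it from a python script (the function image_folder_annotator and its parameters input_folder, output_folder and pattern are assumptions and may not match the actual API exactly):

    from micro_sam.sam_annotator.image_series_annotator import image_folder_annotator

    # Annotate all tif images in a folder one after the other;
    # the annotations are saved to the output folder (paths here are placeholders).
    image_folder_annotator(
        "data/images", "data/annotations", pattern="*.tif", model_type="vit_b"
    )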

    \n\n

    Annotator 3D

    \n\n

    The 3d annotator can be started by

    \n\n\n\n

    The user interface of the 3d annotator looks like this:

    \n\n

    \n\n

    Most elements are the same as in the 2d annotator:

    \n\n
      \n
    1. The napari layers that contain the image, segmentation and prompts. Same as for the 2d annotator but without the auto_segmentation layer.
    2. \n
    3. The prompt menu.
    4. \n
    5. The menu for interactive segmentation.
    6. \n
    7. The 3d segmentation menu. Pressing Segment All Slices (or Shift-S) will extend the segmentation for the current object across the volume.
    8. \n
    9. The menu for committing the segmentation.
    10. \n
    11. The menu for clearing the current annotations.
    12. \n
    \n\n

    Note that you can only segment one object at a time with the 3d annotator.

    \n\n

    Check out this video for a tutorial for the 3d annotation tool.

    \n\n

    Annotator Tracking

    \n\n

    The tracking annotator can be started by

    \n\n\n\n

    The user interface of the tracking annotator looks like this:

    \n\n

    \n\n

    Most elements are the same as in the 2d annotator:

    \n\n
      \n
1. The napari layers that contain the image, segmentation and prompts. Same as for the 2d annotator, but without the auto_segmentation layer; current_tracks and committed_tracks are the equivalents of current_object and committed_objects.
    2. \n
    3. The prompt menu.
    4. \n
    5. The menu with tracking settings: track_state is used to indicate that the object you are tracking is dividing in the current frame. track_id is used to select which of the tracks after division you are following.
    6. \n
    7. The menu for interactive segmentation.
    8. \n
    9. The tracking menu. Press Track Object (or Shift-S) to segment the current object across time.
    10. \n
    11. The menu for committing the current tracking result.
    12. \n
    13. The menu for clearing the current annotations.
    14. \n
    \n\n

Note that the tracking annotator only supports 2d image data; volumetric data is not supported.

    \n\n

    Check out this video for a tutorial for how to use the tracking annotation tool.

    \n\n

    Tips & Tricks

    \n\n\n\n

    Known limitations

    \n\n\n\n

    How to use the Python Library

    \n\n

    The python library can be imported via

    \n\n
    \n
    import micro_sam\n
    \n
    \n\n

    The library

    \n\n\n\n

    You can import these sub-modules via

    \n\n
    \n
    import micro_sam.prompt_based_segmentation\nimport micro_sam.instance_segmentation\n# etc.\n
    \n
    \n\n

    This functionality is used to implement the interactive annotation tools and can also be used as a standalone python library.\nSome preliminary examples for how to use the python library can be found here. Check out the Submodules documentation for more details.
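    For instance, here is a small sketch of prompt-based segmentation with the library; get_sam_model, precompute_image_embeddings and segment_from_points are part of micro_sam.util and micro_sam.prompt_based_segmentation, but the exact signatures used here are assumptions:

    import numpy as np
    from micro_sam import util
    from micro_sam.prompt_based_segmentation import segment_from_points

    # Load a (possibly finetuned) SAM model and precompute the image embeddings.
    predictor = util.get_sam_model(model_type="vit_b")
    image = np.random.randint(0, 255, size=(512, 512), dtype="uint8")  # stand-in for real image data
    image_embeddings = util.precompute_image_embeddings(predictor, image)

    # Segment the object at a point prompt (label 1 marks a positive prompt).
    points = np.array([[256, 256]])
    labels = np.array([1])
    mask = segment_from_points(predictor, points, labels, image_embeddings=image_embeddings)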

    \n\n

    Training your own model

    \n\n

We reimplement the training logic described in the Segment Anything publication to enable finetuning on custom data. We use this functionality to provide the finetuned microscopy models, and it can also be used to finetune models on your own data. In fact, the best results can be expected when finetuning on your own data, and we found that it does not require much annotated training data to get significant improvements in model performance. So a good strategy is to annotate a few images with one of the provided models using one of the interactive annotation tools and, if the annotation is not yet working as well as expected, finetune on the annotated data.

    \n\n

    The training logic is implemented in micro_sam.training and is based on torch-em. Please check out examples/finetuning to see how you can finetune on your own data with it. The script finetune_hela.py contains an example for finetuning on a small custom microscopy dataset and use_finetuned_model.py shows how this model can then be used in the interactive annotation tools.
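    As a rough sketch of what such a finetuning script does (the data loaders are created with torch-em; the high-level helper train_sam and the exact arguments shown here are assumptions, so please refer to the example scripts for the actual API):

    import torch_em
    import micro_sam.training as sam_training

    # Create data loaders for the custom training data with torch-em
    # (paths, keys and patch shape are placeholders).
    train_loader = torch_em.default_segmentation_loader(
        raw_paths="data/train.h5", raw_key="raw",
        label_paths="data/train.h5", label_key="labels",
        patch_shape=(512, 512), batch_size=1,
    )
    val_loader = torch_em.default_segmentation_loader(
        raw_paths="data/val.h5", raw_key="raw",
        label_paths="data/val.h5", label_key="labels",
        patch_shape=(512, 512), batch_size=1,
    )

    # Finetune a SAM model on the custom data (train_sam is an assumed helper name).
    sam_training.train_sam(
        name="sam_finetuned", model_type="vit_b",
        train_loader=train_loader, val_loader=val_loader,
        n_epochs=10,
    )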

    \n\n

Since release v0.4.0 we also support training an additional decoder for automatic instance segmentation. This yields better results than the automatic mask generation of Segment Anything and is significantly faster. You can enable training of the decoder by setting train_instance_segmentation = True here. The script instance_segmentation_with_finetuned_model.py shows how to use it for automatic instance segmentation. We will fully integrate this functionality with the annotation tool in the next release.

    \n\n

More advanced examples of finetuned models, including quantitative and qualitative evaluation, can be found in finetuning, which contains the code for training and evaluating our microscopy models.

    \n\n

    Finetuned models

    \n\n

In addition to the original Segment Anything models, we provide models that are finetuned on microscopy data using the functionality from micro_sam.training. The models are hosted on zenodo. We currently offer the following models:

    \n\n\n\n

    See the two figures below of the improvements through the finetuned model for LM and EM data.

    \n\n

    \n\n

    \n\n

    You can select which of the models is used in the annotation tools by selecting the corresponding name from the Model Type menu:

    \n\n

    \n\n

To use a specific model in the python library you need to pass the corresponding name as the value of the model_type parameter exposed by all relevant functions. See for example the 2d annotator example where use_finetuned_model can be set to True to use the vit_b_lm model.
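    For example, a minimal sketch (get_sam_model lives in micro_sam.util; the keyword name model_type is taken from the documentation above):

    from micro_sam.util import get_sam_model

    # Load the finetuned light microscopy model instead of the default SAM model.
    predictor = get_sam_model(model_type="vit_b_lm")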

    \n\n

Note that we are still working on improving these models and may update them from time to time. All older models will stay available for download on zenodo; see the model sources below.

    \n\n

    Which model should I choose?

    \n\n

    As a rule of thumb:

    \n\n\n\n

See also the figures above for examples where the finetuned models work better than the vanilla models. Currently the model vit_h is used by default.

    \n\n

    We are working on further improving these models and adding new models for other biomedical imaging domains.

    \n\n

    Model Sources

    \n\n

    Here is an overview of all finetuned models we have released to zenodo so far:

    \n\n\n\n

    Some of these models contain multiple versions.

    \n\n

    How to contribute

    \n\n\n\n

    Discuss your ideas

    \n\n

    We welcome new contributions!

    \n\n

    First, discuss your idea by opening a new issue in micro-sam.

    \n\n

    This allows you to ask questions, and have the current developers make suggestions about the best way to implement your ideas.

    \n\n

    You may also find it helpful to look at this developer guide, which explains the organization of the micro-sam code.

    \n\n

    Clone the repository

    \n\n

    We use git for version control.

    \n\n

    Clone the repository, and checkout the development branch:

    \n\n
    git clone https://github.com/computational-cell-analytics/micro-sam.git\ncd micro-sam\ngit checkout dev\n
    \n\n

    Create your development environment

    \n\n

    We use conda to manage our environments. If you don't have this already, install miniconda or mamba to get started.

    \n\n

Now you can create the environment, install user and developer dependencies, and micro-sam as an editable installation:

    \n\n
conda env create -f environment-gpu.yml
conda activate sam
python -m pip install -r requirements-dev.txt
python -m pip install -e .
    \n\n

    Make your changes

    \n\n

    Now it's time to make your code changes.

    \n\n

Typically, changes are made branching off from the development branch. Check out dev and then create a new branch to work on your changes.

    \n\n
    git checkout dev\ngit checkout -b my-new-feature\n
    \n\n

    We use google style python docstrings to create documentation for all new code.

    \n\n

    You may also find it helpful to look at this developer guide, which explains the organization of the micro-sam code.

    \n\n

    Testing

    \n\n

    Run the tests

    \n\n

The tests for micro-sam are run with pytest.

    \n\n

    To run the tests:

    \n\n
    pytest\n
    \n\n

    Writing your own tests

    \n\n

    If you have written new code, you will need to write tests to go with it.

    \n\n

    Unit tests

    \n\n

    Unit tests are the preferred style of tests for user contributions. Unit tests check small, isolated parts of the code for correctness. If your code is too complicated to write unit tests easily, you may need to consider breaking it up into smaller functions that are easier to test.
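    For illustration, here is a minimal unit test in the pytest style; the helper function is hypothetical and only serves to show the pattern of testing one small piece of behavior in isolation:

    import numpy as np


    # Hypothetical helper, used only to illustrate the preferred test style.
    def relabel_consecutive(segmentation):
        """Relabel an instance segmentation so that its ids are 0, 1, 2, ..."""
        _, relabeled = np.unique(segmentation, return_inverse=True)
        return relabeled.reshape(segmentation.shape)


    def test_relabel_consecutive():
        seg = np.array([[0, 0, 5], [7, 7, 5]])
        relabeled = relabel_consecutive(seg)
        # The ids are consecutive and the object structure is preserved.
        assert set(np.unique(relabeled)) == {0, 1, 2}
        assert (relabeled == relabeled[0, 2]).sum() == (seg == 5).sum()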

    \n\n

    Tests involving napari

    \n\n

    In cases where tests must use the napari viewer, these tips might be helpful (in particular, the make_napari_viewer_proxy fixture).
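    A minimal sketch of such a test, assuming the make_napari_viewer_proxy fixture provided by napari's pytest plugin:

    import numpy as np


    def test_add_layers(make_napari_viewer_proxy):
        # The fixture creates a napari viewer (behind a proxy) for the test.
        viewer = make_napari_viewer_proxy()
        viewer.add_image(np.zeros((64, 64), dtype="uint8"), name="raw")
        viewer.add_labels(np.zeros((64, 64), dtype="uint32"), name="committed_objects")
        assert len(viewer.layers) == 2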

    \n\n

    These kinds of tests should be used only in limited circumstances. Developers are advised to prefer smaller unit tests, and avoid integration tests wherever possible.

    \n\n

    Code coverage

    \n\n

    Pytest uses the pytest-cov plugin to automatically determine which lines of code are covered by tests.

    \n\n

    A short summary report is printed to the terminal output whenever you run pytest. The full results are also automatically written to a file named coverage.xml.

    \n\n

    The Coverage Gutters VSCode extension is useful for visualizing which parts of the code need better test coverage. PyCharm professional has a similar feature, and you may be able to find similar tools for your preferred editor.

    \n\n

    We also use codecov.io to display the code coverage results from our Github Actions continuous integration.

    \n\n

    Open a pull request

    \n\n

    Once you've made changes to the code and written some tests to go with it, you are ready to open a pull request. You can mark your pull request as a draft if you are still working on it, and still get the benefit of discussing the best approach with maintainers.

    \n\n

    Remember that typically changes to micro-sam are made branching off from the development branch. So, you will need to open your pull request to merge back into the dev branch like this.

    \n\n

    Optional: Build the documentation

    \n\n

    We use pdoc to build the documentation.

    \n\n

    To build the documentation locally, run this command:

    \n\n
    python build_doc.py\n
    \n\n

    This will start a local server and display the HTML documentation. Any changes you make to the documentation will be updated in real time (you may need to refresh your browser to see the changes).

    \n\n

    If you want to save the HTML files, append --out to the command, like this:

    \n\n
    python build_doc.py --out\n
    \n\n

    This will save the HTML files into a new directory named tmp.

    \n\n

    You can add content to the documentation in two ways:

    \n\n
      \n
    1. By adding or updating google style python docstrings in the micro-sam code.\n
        \n
      • pdoc will automatically find and include docstrings in the documentation.
      • \n
    2. \n
    3. By adding or editing markdown files in the micro-sam doc directory.\n
        \n
      • If you add a new markdown file to the documentation, you must tell pdoc that it exists by adding a line to the micro_sam/__init__.py module docstring (eg: .. include:: ../doc/my_amazing_new_docs_page.md). Otherwise it will not be included in the final documentation build!
      • \n
    4. \n
    \n\n

    Optional: Benchmark performance

    \n\n

    There are a number of options you can use to benchmark performance, and identify problems like slow run times or high memory use in micro-sam.

    \n\n\n\n

    Run the benchmark script

    \n\n

    There is a performance benchmark script available in the micro-sam repository at development/benchmark.py.

    \n\n

    To run the benchmark script:

    \n\n
python development/benchmark.py --model_type vit_t --device cpu
    \n\n

    For more details about the user input arguments for the micro-sam benchmark script, see the help:

    \n\n
    python development/benchmark.py --help\n
    \n\n

    Line profiling

    \n\n

    For more detailed line by line performance results, we can use line-profiler.

    \n\n
    \n

    line_profiler is a module for doing line-by-line profiling of functions. kernprof is a convenient script for running either line_profiler or the Python standard library's cProfile or profile modules, depending on what is available.

    \n
    \n\n

    To do line-by-line profiling:

    \n\n
      \n
    1. Ensure you have line profiler installed: python -m pip install line_profiler
    2. \n
    3. Add @profile decorator to any function in the call stack
    4. \n
    5. Run kernprof -lv benchmark.py --model_type vit_t --device cpu
    6. \n
    \n\n

    For more details about how to use line-profiler and kernprof, see the documentation.
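    For example, the decorator can be added like this (a sketch; my_function is a stand-in for the real function you want to profile, and the explicit import of profile requires a recent line_profiler version, otherwise kernprof injects it into builtins when run with -l):

    # Recent line_profiler versions allow importing the decorator explicitly;
    # when run via `kernprof -l` the decorator is also available as a builtin.
    from line_profiler import profile


    @profile
    def my_function(x):  # stand-in for the function you want to profile
        return [i * x for i in range(1000)]


    my_function(3)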

    \n\n

    For more details about the user input arguments for the micro-sam benchmark script, see the help:

    \n\n
    python development/benchmark.py --help\n
    \n\n

    Snakeviz visualization

    \n\n

    For more detailed visualizations of profiling results, we use snakeviz.

    \n\n
    \n

SnakeViz is a browser based graphical viewer for the output of Python's cProfile module

    \n
    \n\n
      \n
    1. Ensure you have snakeviz installed: python -m pip install snakeviz
    2. \n
    3. Generate profile file: python -m cProfile -o program.prof benchmark.py --model_type vit_h --device cpu
    4. \n
    5. Visualize profile file: snakeviz program.prof
    6. \n
    \n\n

    For more details about how to use snakeviz, see the documentation.

    \n\n

    Memory profiling with memray

    \n\n

To investigate memory use specifically, we use memray.

    \n\n
    \n

    Memray is a memory profiler for Python. It can track memory allocations in Python code, in native extension modules, and in the Python interpreter itself. It can generate several different types of reports to help you analyze the captured memory usage data. While commonly used as a CLI tool, it can also be used as a library to perform more fine-grained profiling tasks.

    \n
    \n\n

    For more details about how to use memray, see the documentation.
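    As an example of the library usage, memory allocations in a code block can be captured with memray's Tracker context manager and then inspected with the memray CLI (e.g. memray flamegraph); the file name below is a placeholder:

    import numpy as np
    import memray

    # Track all allocations made inside the block and write them to a capture file.
    with memray.Tracker("memray_capture.bin"):
        buffers = [np.zeros((1024, 1024), dtype="float32") for _ in range(10)]
        del buffers

    # Afterwards, generate a report, e.g.:  memray flamegraph memray_capture.bin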

    \n\n

    For Developers

    \n\n

    This software consists of four different python (sub-)modules:

    \n\n\n\n

    Annotation Tools

    \n\n

    The annotation tools are currently implemented as stand-alone napari applications. We are in the process of implementing them as napari plugins instead (see https://github.com/computational-cell-analytics/micro-sam/issues/167 for details), and the descriptions here refer to the planned architecture for the plugins.

    \n\n

    There are four annotation tools:

    \n\n\n\n

    An overview of the functionality of the different tools:

Functionality | annotator_2d | annotator_3d | annotator_tracking
Interactive segmentation | Yes | Yes | Yes
For multiple objects at a time | Yes | No | No
Interactive 3d segmentation via projection | No | Yes | Yes
Support for dividing objects | No | No | Yes
Automatic segmentation | Yes | Yes (on dev) | No
    \n\n

The functionality for the image_series_annotator is not listed because it is identical to the functionality of the annotator_2d.

    \n\n

Each tool implements the following core logic:

    \n\n
      \n
1. The image embeddings (predictions from the SAM image encoder) are pre-computed for the input data (2d image, image volume or timeseries). These embeddings can be cached to a zarr file; see the sketch after this list.
    2. \n
    3. Interactive (and automatic) segmentation functionality is implemented by a UI based on napari and magicgui functionality.
    4. \n
    \n\n
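    A sketch of step 1, precomputing and caching the embeddings (precompute_image_embeddings and get_sam_model live in micro_sam.util; the keyword name save_path is an assumption):

    import numpy as np
    from micro_sam import util

    # Stand-in for the real input data (2d image, image volume or timeseries).
    image = np.random.randint(0, 255, size=(512, 512), dtype="uint8")

    # Load the model and precompute the embeddings; save_path caches them to a zarr file
    # so that they only need to be computed once per image.
    predictor = util.get_sam_model(model_type="vit_b")
    image_embeddings = util.precompute_image_embeddings(predictor, image, save_path="embeddings.zarr")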

    Each tool has two different entry points:

    \n\n\n\n

The tools are implemented in their own submodules, e.g. micro_sam.sam_annotator.annotator_2d, with shared functionality implemented in micro_sam.sam_annotator.util. The function micro_sam.sam_annotator.annotator_2d.annotator_2d_plugin implements the plugin entry point, using the magicgui.magic_factory decorator. micro_sam.sam_annotator.annotator_2d.annotator_2d implements the CLI entry point; it calls the annotator_2d_plugin function internally. The image embeddings are computed by the embedding widget (@GenevieveBuckley: will need to be implemented in your PR), which takes the image data from an image layer. In case of the plugin entry point this image layer is created by the user (by loading an image into napari), and the user can then select in the embedding widget which layer to use for embedding computation. In case of the CLI the image data is specified via the -i parameter, the layer is created for that image and the embeddings are computed for it automatically. The same overall design holds true for the other plugins. The flow chart below shows a simplified overview of the design of the 2d annotation tool. Rounded squares represent functions or the corresponding widget, squares represent napari layers or other data, orange marks the plugin entry point and cyan the CLI. Arrows that do not have a label correspond to a simple input/output relation.

    \n\n

    \"annotator

    \n\n

    \n\n

    Using micro_sam on BAND

    \n\n

BAND is a service offered by EMBL Heidelberg that gives access to a virtual desktop for image analysis tasks. It is free to use and micro_sam is installed there. In order to use BAND and start micro_sam on it, follow these steps:

    \n\n

    Start BAND

    \n\n\n\n

    \"image\"

    \n\n

    Start micro_sam in BAND

    \n\n\n\n

Transferring data to BAND

    \n\n

To copy data to and from BAND you can use any cloud storage, e.g. ownCloud, Dropbox or Google Drive. For this, it's important to note that copy and paste, which you may need for accessing links on BAND, works a bit differently in BAND:

    \n\n\n\n

The video below shows how to copy over a link from ownCloud and then download the data on BAND using copy and paste:

    \n\n

    https://github.com/computational-cell-analytics/micro-sam/assets/4263537/825bf86e-017e-41fc-9e42-995d21203287

    \n"}, {"fullname": "micro_sam.evaluation", "modulename": "micro_sam.evaluation", "kind": "module", "doc": "

    Functionality for evaluating Segment Anything models on microscopy data.

    \n"}, {"fullname": "micro_sam.evaluation.evaluation", "modulename": "micro_sam.evaluation.evaluation", "kind": "module", "doc": "

    Evaluation functionality for segmentation predictions from micro_sam.evaluation.automatic_mask_generation\nand micro_sam.evaluation.inference.

    \n"}, {"fullname": "micro_sam.evaluation.evaluation.run_evaluation", "modulename": "micro_sam.evaluation.evaluation", "qualname": "run_evaluation", "kind": "function", "doc": "

    Run evaluation for instance segmentation predictions.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    A DataFrame that contains the evaluation results.

    \n
    \n", "signature": "(\tgt_paths: List[Union[str, os.PathLike]],\tprediction_paths: List[Union[str, os.PathLike]],\tsave_path: Union[str, os.PathLike, NoneType] = None,\tverbose: bool = True) -> pandas.core.frame.DataFrame:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.experiments", "modulename": "micro_sam.evaluation.experiments", "kind": "module", "doc": "

    Predefined experiment settings for experiments with different prompt strategies.

    \n"}, {"fullname": "micro_sam.evaluation.experiments.ExperimentSetting", "modulename": "micro_sam.evaluation.experiments", "qualname": "ExperimentSetting", "kind": "variable", "doc": "

    \n", "default_value": "typing.Dict"}, {"fullname": "micro_sam.evaluation.experiments.full_experiment_settings", "modulename": "micro_sam.evaluation.experiments", "qualname": "full_experiment_settings", "kind": "function", "doc": "

    The full experiment settings.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The list of experiment settings.

    \n
    \n", "signature": "(\tuse_boxes: bool = False,\tpositive_range: Optional[List[int]] = None,\tnegative_range: Optional[List[int]] = None) -> List[Dict]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.experiments.default_experiment_settings", "modulename": "micro_sam.evaluation.experiments", "qualname": "default_experiment_settings", "kind": "function", "doc": "

    The three default experiment settings.

    \n\n

For the default experiments we use a single positive prompt, two positive and four negative prompts, and box prompts.

    \n\n
    Returns:
    \n\n
    \n

    The list of experiment settings.

    \n
    \n", "signature": "() -> List[Dict]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.experiments.get_experiment_setting_name", "modulename": "micro_sam.evaluation.experiments", "qualname": "get_experiment_setting_name", "kind": "function", "doc": "

    Get the name for the given experiment setting.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The name for this experiment setting.

    \n
    \n", "signature": "(setting: Dict) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.inference", "modulename": "micro_sam.evaluation.inference", "kind": "module", "doc": "

    Inference with Segment Anything models and different prompt strategies.

    \n"}, {"fullname": "micro_sam.evaluation.inference.get_predictor", "modulename": "micro_sam.evaluation.inference", "qualname": "get_predictor", "kind": "function", "doc": "

    Get the segment anything predictor from an exported or custom checkpoint.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The segment anything predictor.

    \n
    \n", "signature": "(\tcheckpoint_path: Union[str, os.PathLike],\tmodel_type: str,\tdevice: Union[str, torch.device, NoneType] = None,\treturn_state: bool = False,\tis_custom_model: Optional[bool] = None) -> segment_anything.predictor.SamPredictor:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.inference.precompute_all_embeddings", "modulename": "micro_sam.evaluation.inference", "qualname": "precompute_all_embeddings", "kind": "function", "doc": "

    Precompute all image embeddings.

    \n\n

    To enable running different inference tasks in parallel afterwards.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\timage_paths: List[Union[str, os.PathLike]],\tembedding_dir: Union[str, os.PathLike]) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.inference.precompute_all_prompts", "modulename": "micro_sam.evaluation.inference", "qualname": "precompute_all_prompts", "kind": "function", "doc": "

    Precompute all point prompts.

    \n\n

    To enable running different inference tasks in parallel afterwards.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tgt_paths: List[Union[str, os.PathLike]],\tprompt_save_dir: Union[str, os.PathLike],\tprompt_settings: List[Dict[str, Any]]) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.inference.run_inference_with_prompts", "modulename": "micro_sam.evaluation.inference", "qualname": "run_inference_with_prompts", "kind": "function", "doc": "

    Run segment anything inference for multiple images using prompts derived from groundtruth.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\timage_paths: List[Union[str, os.PathLike]],\tgt_paths: List[Union[str, os.PathLike]],\tembedding_dir: Union[str, os.PathLike],\tprediction_dir: Union[str, os.PathLike],\tuse_points: bool,\tuse_boxes: bool,\tn_positives: int,\tn_negatives: int,\tdilation: int = 5,\tprompt_save_dir: Union[str, os.PathLike, NoneType] = None,\tbatch_size: int = 512) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.inference.run_inference_with_iterative_prompting", "modulename": "micro_sam.evaluation.inference", "qualname": "run_inference_with_iterative_prompting", "kind": "function", "doc": "

Run Segment Anything inference for multiple images using prompts iteratively derived from model outputs and groundtruth.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\timage_paths: List[Union[str, os.PathLike]],\tgt_paths: List[Union[str, os.PathLike]],\tembedding_dir: Union[str, os.PathLike],\tprediction_dir: Union[str, os.PathLike],\tstart_with_box_prompt: bool,\tdilation: int = 5,\tbatch_size: int = 32,\tn_iterations: int = 8) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation", "modulename": "micro_sam.evaluation.instance_segmentation", "kind": "module", "doc": "

    Inference and evaluation for the automatic instance segmentation functionality.

    \n"}, {"fullname": "micro_sam.evaluation.instance_segmentation.default_grid_search_values_amg", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "default_grid_search_values_amg", "kind": "function", "doc": "

    Default grid-search parameter for AMG-based instance segmentation.

    \n\n

    Return grid search values for the two most important parameters:

    \n\n\n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The values for grid search.

    \n
    \n", "signature": "(\tiou_thresh_values: Optional[List[float]] = None,\tstability_score_values: Optional[List[float]] = None) -> Dict[str, List[float]]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.default_grid_search_values_instance_segmentation_with_decoder", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "default_grid_search_values_instance_segmentation_with_decoder", "kind": "function", "doc": "

    Default grid-search parameter for decoder-based instance segmentation.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The values for grid search.

    \n
    \n", "signature": "(\tcenter_distance_threshold_values: Optional[List[float]] = None,\tboundary_distance_threshold_values: Optional[List[float]] = None,\tdistance_smoothing_values: Optional[List[float]] = None,\tmin_size_values: Optional[List[float]] = None) -> Dict[str, List[float]]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.run_instance_segmentation_grid_search", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "run_instance_segmentation_grid_search", "kind": "function", "doc": "

    Run grid search for automatic mask generation.

    \n\n

    The parameters and their respective value ranges for the grid search are specified via the\n'grid_search_values' argument. For example, to run a grid search over the parameters 'pred_iou_thresh'\nand 'stability_score_thresh', you can pass the following:

    \n\n
    grid_search_values = {\n    \"pred_iou_thresh\": [0.6, 0.7, 0.8, 0.9],\n    \"stability_score_thresh\": [0.6, 0.7, 0.8, 0.9],\n}\n
    \n\n

    All combinations of the parameters will be checked.

    \n\n

    You can use the functions default_grid_search_values_instance_segmentation_with_decoder\nor default_grid_search_values_amg to get the default grid search parameters for the two\nrespective instance segmentation methods.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tsegmenter: Union[micro_sam.instance_segmentation.AMGBase, micro_sam.instance_segmentation.InstanceSegmentationWithDecoder],\tgrid_search_values: Dict[str, List],\timage_paths: List[Union[str, os.PathLike]],\tgt_paths: List[Union[str, os.PathLike]],\tresult_dir: Union[str, os.PathLike],\tembedding_dir: Union[str, os.PathLike, NoneType],\tfixed_generate_kwargs: Optional[Dict[str, Any]] = None,\tverbose_gs: bool = False,\timage_key: Optional[str] = None,\tgt_key: Optional[str] = None,\trois: Optional[Tuple[slice, ...]] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.run_instance_segmentation_inference", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "run_instance_segmentation_inference", "kind": "function", "doc": "

    Run inference for automatic mask generation.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tsegmenter: Union[micro_sam.instance_segmentation.AMGBase, micro_sam.instance_segmentation.InstanceSegmentationWithDecoder],\timage_paths: List[Union[str, os.PathLike]],\tembedding_dir: Union[str, os.PathLike],\tprediction_dir: Union[str, os.PathLike],\tgenerate_kwargs: Optional[Dict[str, Any]] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.evaluate_instance_segmentation_grid_search", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "evaluate_instance_segmentation_grid_search", "kind": "function", "doc": "

Evaluate grid search results.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The best parameter setting.\n The evaluation score for the best setting.

    \n
    \n", "signature": "(\tresult_dir: Union[str, os.PathLike],\tgrid_search_parameters: List[str],\tcriterion: str = 'mSA') -> Tuple[Dict[str, Any], float]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.run_instance_segmentation_grid_search_and_inference", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "run_instance_segmentation_grid_search_and_inference", "kind": "function", "doc": "

    Run grid search and inference for automatic mask generation.

    \n\n

    Please refer to the documentation of run_instance_segmentation_grid_search\nfor details on how to specify the grid search parameters.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tsegmenter: Union[micro_sam.instance_segmentation.AMGBase, micro_sam.instance_segmentation.InstanceSegmentationWithDecoder],\tgrid_search_values: Dict[str, List],\tval_image_paths: List[Union[str, os.PathLike]],\tval_gt_paths: List[Union[str, os.PathLike]],\ttest_image_paths: List[Union[str, os.PathLike]],\tembedding_dir: Union[str, os.PathLike],\tprediction_dir: Union[str, os.PathLike],\tresult_dir: Union[str, os.PathLike],\tfixed_generate_kwargs: Optional[Dict[str, Any]] = None,\tverbose_gs: bool = True) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell", "modulename": "micro_sam.evaluation.livecell", "kind": "module", "doc": "

    Inference and evaluation for the LiveCELL dataset and\nthe different cell lines contained in it.

    \n"}, {"fullname": "micro_sam.evaluation.livecell.CELL_TYPES", "modulename": "micro_sam.evaluation.livecell", "qualname": "CELL_TYPES", "kind": "variable", "doc": "

    \n", "default_value": "['A172', 'BT474', 'BV2', 'Huh7', 'MCF7', 'SHSY5Y', 'SkBr3', 'SKOV3']"}, {"fullname": "micro_sam.evaluation.livecell.livecell_inference", "modulename": "micro_sam.evaluation.livecell", "qualname": "livecell_inference", "kind": "function", "doc": "

    Run inference for livecell with a fixed prompt setting.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tinput_folder: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tuse_points: bool,\tuse_boxes: bool,\tn_positives: Optional[int] = None,\tn_negatives: Optional[int] = None,\tprompt_folder: Union[os.PathLike, str, NoneType] = None,\tpredictor: Optional[segment_anything.predictor.SamPredictor] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.run_livecell_amg", "modulename": "micro_sam.evaluation.livecell", "qualname": "run_livecell_amg", "kind": "function", "doc": "

    Run automatic mask generation grid-search and inference for livecell.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The path where the predicted images are stored.

    \n
    \n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tinput_folder: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tiou_thresh_values: Optional[List[float]] = None,\tstability_score_values: Optional[List[float]] = None,\tverbose_gs: bool = False,\tn_val_per_cell_type: int = 25) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.run_livecell_instance_segmentation_with_decoder", "modulename": "micro_sam.evaluation.livecell", "qualname": "run_livecell_instance_segmentation_with_decoder", "kind": "function", "doc": "

Run instance segmentation with decoder grid-search and inference for livecell.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The path where the predicted images are stored.

    \n
    \n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tinput_folder: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tverbose_gs: bool = False,\tn_val_per_cell_type: int = 25) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.run_livecell_inference", "modulename": "micro_sam.evaluation.livecell", "qualname": "run_livecell_inference", "kind": "function", "doc": "

    Run LiveCELL inference with command line tool.

    \n", "signature": "() -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.evaluate_livecell_predictions", "modulename": "micro_sam.evaluation.livecell", "qualname": "evaluate_livecell_predictions", "kind": "function", "doc": "

    Evaluate LiveCELL predictions.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tgt_dir: Union[os.PathLike, str],\tpred_dir: Union[os.PathLike, str],\tverbose: bool) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.run_livecell_evaluation", "modulename": "micro_sam.evaluation.livecell", "qualname": "run_livecell_evaluation", "kind": "function", "doc": "

    Run LiveCELL evaluation with command line tool.

    \n", "signature": "() -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.model_comparison", "modulename": "micro_sam.evaluation.model_comparison", "kind": "module", "doc": "

    Functionality for qualitative comparison of Segment Anything models on microscopy data.

    \n"}, {"fullname": "micro_sam.evaluation.model_comparison.generate_data_for_model_comparison", "modulename": "micro_sam.evaluation.model_comparison", "qualname": "generate_data_for_model_comparison", "kind": "function", "doc": "

    Generate samples for qualitative model comparison.

    \n\n

    This precomputes the input for model_comparison and model_comparison_with_napari.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tloader: torch.utils.data.dataloader.DataLoader,\toutput_folder: Union[str, os.PathLike],\tmodel_type1: str,\tmodel_type2: str,\tn_samples: int) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.model_comparison.model_comparison", "modulename": "micro_sam.evaluation.model_comparison", "qualname": "model_comparison", "kind": "function", "doc": "

Create images for a qualitative model comparison.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\toutput_folder: Union[str, os.PathLike],\tn_images_per_sample: int,\tmin_size: int,\tplot_folder: Union[os.PathLike, str, NoneType] = None,\tpoint_radius: int = 4,\toutline_dilation: int = 0) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.model_comparison.model_comparison_with_napari", "modulename": "micro_sam.evaluation.model_comparison", "qualname": "model_comparison_with_napari", "kind": "function", "doc": "

Use napari to display the qualitative comparison results for two models.

    \n\n
    Arguments:
    \n\n\n", "signature": "(output_folder: Union[str, os.PathLike], show_points: bool = True) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.inference", "modulename": "micro_sam.inference", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.inference.batched_inference", "modulename": "micro_sam.inference", "qualname": "batched_inference", "kind": "function", "doc": "

    Run batched inference for input prompts.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The predicted segmentation masks.

    \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\timage: numpy.ndarray,\tbatch_size: int,\tboxes: Optional[numpy.ndarray] = None,\tpoints: Optional[numpy.ndarray] = None,\tpoint_labels: Optional[numpy.ndarray] = None,\tmultimasking: bool = False,\tembedding_path: Union[str, os.PathLike, NoneType] = None,\treturn_instance_segmentation: bool = True,\tsegmentation_ids: Optional[list] = None,\treduce_multimasking: bool = True):", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation", "modulename": "micro_sam.instance_segmentation", "kind": "module", "doc": "

    Automated instance segmentation functionality.\nThe classes implemented here extend the automatic instance segmentation from Segment Anything:\nhttps://computational-cell-analytics.github.io/micro-sam/micro_sam.html

    \n"}, {"fullname": "micro_sam.instance_segmentation.mask_data_to_segmentation", "modulename": "micro_sam.instance_segmentation", "qualname": "mask_data_to_segmentation", "kind": "function", "doc": "

    Convert the output of the automatic mask generation to an instance segmentation.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The instance segmentation.

    \n
    \n", "signature": "(\tmasks: List[Dict[str, Any]],\twith_background: bool,\tmin_object_size: int = 0,\tmax_object_size: Optional[int] = None) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.AMGBase", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase", "kind": "class", "doc": "

    Base class for the automatic mask generators.

    \n", "bases": "abc.ABC"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.is_initialized", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.is_initialized", "kind": "variable", "doc": "

    Whether the mask generator has already been initialized.

    \n"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.crop_list", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.crop_list", "kind": "variable", "doc": "

    The list of mask data after initialization.

    \n"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.crop_boxes", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.crop_boxes", "kind": "variable", "doc": "

    The list of crop boxes.

    \n"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.original_size", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.original_size", "kind": "variable", "doc": "

    The original image size.

    \n"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.get_state", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.get_state", "kind": "function", "doc": "

    Get the initialized state of the mask generator.

    \n\n
    Returns:
    \n\n
    \n

    State of the mask generator.

    \n
    \n", "signature": "(self) -> Dict[str, Any]:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.set_state", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.set_state", "kind": "function", "doc": "

    Set the state of the mask generator.

    \n\n
    Arguments:
    \n\n\n", "signature": "(self, state: Dict[str, Any]) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.AutomaticMaskGenerator", "modulename": "micro_sam.instance_segmentation", "qualname": "AutomaticMaskGenerator", "kind": "class", "doc": "

    Generates an instance segmentation without prompts, using a point grid.

    \n\n

    This class implements the same logic as\nhttps://github.com/facebookresearch/segment-anything/blob/main/segment_anything/automatic_mask_generator.py\nIt decouples the computationally expensive steps of generating masks from the cheap post-processing operation\nto filter these masks to enable grid search and interactively changing the post-processing.

    \n\n

    Use this class as follows:

    \n\n
    \n
    amg = AutomaticMaskGenerator(predictor)\namg.initialize(image)  # Initialize the masks, this takes care of all expensive computations.\nmasks = amg.generate(pred_iou_thresh=0.8)  # Generate the masks. This is fast and enables testing parameters\n
    \n
    \n\n
    Arguments:
    \n\n\n", "bases": "AMGBase"}, {"fullname": "micro_sam.instance_segmentation.AutomaticMaskGenerator.__init__", "modulename": "micro_sam.instance_segmentation", "qualname": "AutomaticMaskGenerator.__init__", "kind": "function", "doc": "

    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tpoints_per_side: Optional[int] = 32,\tpoints_per_batch: Optional[int] = None,\tcrop_n_layers: int = 0,\tcrop_overlap_ratio: float = 0.3413333333333333,\tcrop_n_points_downscale_factor: int = 1,\tpoint_grids: Optional[List[numpy.ndarray]] = None,\tstability_score_offset: float = 1.0)"}, {"fullname": "micro_sam.instance_segmentation.AutomaticMaskGenerator.initialize", "modulename": "micro_sam.instance_segmentation", "qualname": "AutomaticMaskGenerator.initialize", "kind": "function", "doc": "

    Initialize image embeddings and masks for an image.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tself,\timage: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tverbose: bool = False) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.AutomaticMaskGenerator.generate", "modulename": "micro_sam.instance_segmentation", "qualname": "AutomaticMaskGenerator.generate", "kind": "function", "doc": "

    Generate instance segmentation for the currently initialized image.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The instance segmentation masks.

    \n
    \n", "signature": "(\tself,\tpred_iou_thresh: float = 0.88,\tstability_score_thresh: float = 0.95,\tbox_nms_thresh: float = 0.7,\tcrop_nms_thresh: float = 0.7,\tmin_mask_region_area: int = 0,\toutput_mode: str = 'binary_mask') -> List[Dict[str, Any]]:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.TiledAutomaticMaskGenerator", "modulename": "micro_sam.instance_segmentation", "qualname": "TiledAutomaticMaskGenerator", "kind": "class", "doc": "

    Generates an instance segmentation without prompts, using a point grid.

    \n\n

    Implements the same functionality as AutomaticMaskGenerator but for tiled embeddings.

    \n\n
    Arguments:
    \n\n\n", "bases": "AutomaticMaskGenerator"}, {"fullname": "micro_sam.instance_segmentation.TiledAutomaticMaskGenerator.__init__", "modulename": "micro_sam.instance_segmentation", "qualname": "TiledAutomaticMaskGenerator.__init__", "kind": "function", "doc": "

    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tpoints_per_side: Optional[int] = 32,\tpoints_per_batch: int = 64,\tpoint_grids: Optional[List[numpy.ndarray]] = None,\tstability_score_offset: float = 1.0)"}, {"fullname": "micro_sam.instance_segmentation.TiledAutomaticMaskGenerator.initialize", "modulename": "micro_sam.instance_segmentation", "qualname": "TiledAutomaticMaskGenerator.initialize", "kind": "function", "doc": "

    Initialize image embeddings and masks for an image.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tself,\timage: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\tverbose: bool = False,\tembedding_save_path: Optional[str] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.get_amg", "modulename": "micro_sam.instance_segmentation", "qualname": "get_amg", "kind": "function", "doc": "

    Get the automatic mask generator class.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The automatic mask generator.

    \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tis_tiled: bool,\t**kwargs) -> micro_sam.instance_segmentation.AMGBase:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter", "kind": "class", "doc": "

    Adapter to contain the UNETR decoder in a single module.

    \n\n

    To apply the decoder on top of pre-computed embeddings for\nthe segmentation functionality.\nSee also: https://github.com/constantinpape/torch-em/blob/main/torch_em/model/unetr.py

    \n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.__init__", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.__init__", "kind": "function", "doc": "

    Initializes internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(unetr)"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.base", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.base", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.out_conv", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.out_conv", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.deconv_out", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.deconv_out", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.decoder_head", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.decoder_head", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.final_activation", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.final_activation", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.postprocess_masks", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.postprocess_masks", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.decoder", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.decoder", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.deconv1", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.deconv1", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.deconv2", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.deconv2", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.deconv3", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.deconv3", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.deconv4", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.deconv4", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.forward", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.forward", "kind": "function", "doc": "

    Defines the computation performed at every call.

    \n\n

    Should be overridden by all subclasses.

    \n\n
    \n\n

    Although the recipe for forward pass needs to be defined within\nthis function, one should call the Module instance afterwards\ninstead of this since the former takes care of running the\nregistered hooks while the latter silently ignores them.

    \n\n
    \n", "signature": "(self, input_, input_shape, original_shape):", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.load_instance_segmentation_with_decoder_from_checkpoint", "modulename": "micro_sam.instance_segmentation", "qualname": "load_instance_segmentation_with_decoder_from_checkpoint", "kind": "function", "doc": "

    Load InstanceSegmentationWithDecoder from a training.JointSamTrainer checkpoint.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    InstanceSegmentationWithDecoder

    \n
    \n", "signature": "(\tcheckpoint: Union[os.PathLike, str],\tmodel_type: str,\tdevice: Union[str, torch.device, NoneType] = None):", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder", "kind": "class", "doc": "

    Generates an instance segmentation without prompts, using a decoder.

    \n\n

    Implements the same interface as AutomaticMaskGenerator.

    \n\n

    Use this class as follows:

    \n\n
    \n
    segmenter = InstanceSegmentationWithDecoder(predictor, decoder)\nsegmenter.initialize(image)   # Predict the image embeddings and decoder outputs.\nmasks = segmenter.generate(center_distance_threshold=0.75)  # Generate the instance segmentation.\n
    \n
    \n\n
    Arguments:
    \n\n\n"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.__init__", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.__init__", "kind": "function", "doc": "

    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tdecoder: torch.nn.modules.module.Module)"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.is_initialized", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.is_initialized", "kind": "variable", "doc": "

    Whether the mask generator has already been initialized.

    \n"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.initialize", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.initialize", "kind": "function", "doc": "

    Initialize image embeddings and decoder predictions for an image.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tself,\timage: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.generate", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.generate", "kind": "function", "doc": "

    Generate instance segmentation for the currently initialized image.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The instance segmentation masks.

    \n
    \n", "signature": "(\tself,\tcenter_distance_threshold: float = 0.5,\tboundary_distance_threshold: float = 0.5,\tforeground_threshold: float = 0.5,\tdistance_smoothing: float = 1.6,\tmin_size: int = 0,\toutput_mode: Optional[str] = 'binary_mask') -> List[Dict[str, Any]]:", "funcdef": "def"}, {"fullname": "micro_sam.multi_dimensional_segmentation", "modulename": "micro_sam.multi_dimensional_segmentation", "kind": "module", "doc": "

Multi-dimensional segmentation with Segment Anything.

    \n"}, {"fullname": "micro_sam.multi_dimensional_segmentation.segment_mask_in_volume", "modulename": "micro_sam.multi_dimensional_segmentation", "qualname": "segment_mask_in_volume", "kind": "function", "doc": "

Segment an object mask in volumetric data.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    Array with the volumetric segmentation

    \n
    \n", "signature": "(\tsegmentation: numpy.ndarray,\tpredictor: segment_anything.predictor.SamPredictor,\timage_embeddings: Dict[str, Any],\tsegmented_slices: numpy.ndarray,\tstop_lower: bool,\tstop_upper: bool,\tiou_threshold: float,\tprojection: str,\tprogress_bar: Optional[Any] = None,\tbox_extension: float = 0.0) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.multi_dimensional_segmentation.segment_3d_from_slice", "modulename": "micro_sam.multi_dimensional_segmentation", "qualname": "segment_3d_from_slice", "kind": "function", "doc": "

    Segment all objects in a volume intersecting with a specific slice.

    \n\n

    This function first segments the objects in the specified slice using the\nautomatic instance segmentation functionality. Then it segments all objects that\nwere found in that slice in the volume.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    Segmentation volume.

    \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\traw: numpy.ndarray,\tz: Optional[int] = None,\tembedding_path: Union[os.PathLike, str, NoneType] = None,\tprojection: str = 'mask',\tbox_extension: float = 0.0,\tverbose: bool = True,\tpred_iou_thresh: float = 0.88,\tstability_score_thresh: float = 0.95,\tmin_object_size_z: int = 50,\tmax_object_size_z: Optional[int] = None,\tiou_threshold: float = 0.8):", "funcdef": "def"}, {"fullname": "micro_sam.precompute_state", "modulename": "micro_sam.precompute_state", "kind": "module", "doc": "

    Precompute image embeddings and automatic mask generator state for image data.

    \n"}, {"fullname": "micro_sam.precompute_state.cache_amg_state", "modulename": "micro_sam.precompute_state", "qualname": "cache_amg_state", "kind": "function", "doc": "

    Compute and cache or load the state for the automatic mask generator.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The automatic mask generator class with the cached state.

    \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\traw: numpy.ndarray,\timage_embeddings: Dict[str, Any],\tsave_path: Union[str, os.PathLike],\tverbose: bool = True,\ti: Optional[int] = None,\t**kwargs) -> micro_sam.instance_segmentation.AMGBase:", "funcdef": "def"}, {"fullname": "micro_sam.precompute_state.precompute_state", "modulename": "micro_sam.precompute_state", "qualname": "precompute_state", "kind": "function", "doc": "

    Precompute the image embeddings and other optional state for the input image(s).

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tinput_path: Union[os.PathLike, str],\toutput_path: Union[os.PathLike, str],\tmodel_type: str = 'vit_h',\tcheckpoint_path: Union[os.PathLike, str, NoneType] = None,\tkey: Optional[str] = None,\tndim: Optional[int] = None,\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\tprecompute_amg_state: bool = False) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.prompt_based_segmentation", "modulename": "micro_sam.prompt_based_segmentation", "kind": "module", "doc": "

    Functions for prompt-based segmentation with Segment Anything.

    \n"}, {"fullname": "micro_sam.prompt_based_segmentation.segment_from_points", "modulename": "micro_sam.prompt_based_segmentation", "qualname": "segment_from_points", "kind": "function", "doc": "

    Segmentation from point prompts.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The binary segmentation mask.

    \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tpoints: numpy.ndarray,\tlabels: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tmultimask_output: bool = False,\treturn_all: bool = False,\tuse_best_multimask: Optional[bool] = None):", "funcdef": "def"}, {"fullname": "micro_sam.prompt_based_segmentation.segment_from_mask", "modulename": "micro_sam.prompt_based_segmentation", "qualname": "segment_from_mask", "kind": "function", "doc": "

    Segmentation from a mask prompt.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The binary segmentation mask.

    \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tmask: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tuse_box: bool = True,\tuse_mask: bool = True,\tuse_points: bool = False,\toriginal_size: Optional[Tuple[int, ...]] = None,\tmultimask_output: bool = False,\treturn_all: bool = False,\treturn_logits: bool = False,\tbox_extension: float = 0.0,\tbox: Optional[numpy.ndarray] = None,\tpoints: Optional[numpy.ndarray] = None,\tlabels: Optional[numpy.ndarray] = None):", "funcdef": "def"}, {"fullname": "micro_sam.prompt_based_segmentation.segment_from_box", "modulename": "micro_sam.prompt_based_segmentation", "qualname": "segment_from_box", "kind": "function", "doc": "

    Segmentation from a box prompt.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The binary segmentation mask.

    \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tbox: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\toriginal_size: Optional[Tuple[int, ...]] = None,\tmultimask_output: bool = False,\treturn_all: bool = False,\tbox_extension: float = 0.0):", "funcdef": "def"}, {"fullname": "micro_sam.prompt_based_segmentation.segment_from_box_and_points", "modulename": "micro_sam.prompt_based_segmentation", "qualname": "segment_from_box_and_points", "kind": "function", "doc": "

    Segmentation from a box prompt and point prompts.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The binary segmentation mask.

    \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tbox: numpy.ndarray,\tpoints: numpy.ndarray,\tlabels: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\toriginal_size: Optional[Tuple[int, ...]] = None,\tmultimask_output: bool = False,\treturn_all: bool = False):", "funcdef": "def"}, {"fullname": "micro_sam.prompt_generators", "modulename": "micro_sam.prompt_generators", "kind": "module", "doc": "

    Classes for generating prompts from ground-truth segmentation masks.\nFor training or evaluation of prompt-based segmentation.

    \n"}, {"fullname": "micro_sam.prompt_generators.PromptGeneratorBase", "modulename": "micro_sam.prompt_generators", "qualname": "PromptGeneratorBase", "kind": "class", "doc": "

    PromptGeneratorBase is an interface to implement specific prompt generators.

    \n"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator", "kind": "class", "doc": "

    Generate point and/or box prompts from an instance segmentation.

    \n\n

You can use this class to derive prompts from an instance segmentation, either for\nevaluation purposes or for training Segment Anything on custom data.\nIn order to use this generator you need to precompute the bounding boxes and center\ncoordinates of the instance segmentation, using e.g. util.get_centers_and_bounding_boxes.

    \n\n

    Here's an example for how to use this class:

    \n\n
    \n
    # Initialize generator for 1 positive and 4 negative point prompts.\nprompt_generator = PointAndBoxPromptGenerator(1, 4, dilation_strength=8)\n\n# Precompute the bounding boxes for the given segmentation\nbounding_boxes, _ = util.get_centers_and_bounding_boxes(segmentation)\n\n# generate point prompts for the objects with ids 1, 2 and 3\nseg_ids = (1, 2, 3)\nobject_mask = np.stack([segmentation == seg_id for seg_id in seg_ids])[:, None]\nthis_bounding_boxes = [bounding_boxes[seg_id] for seg_id in seg_ids]\npoint_coords, point_labels, _, _ = prompt_generator(object_mask, this_bounding_boxes)\n
    \n
    \n\n
    Arguments:
    \n\n\n", "bases": "PromptGeneratorBase"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator.__init__", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator.__init__", "kind": "function", "doc": "

    \n", "signature": "(\tn_positive_points: int,\tn_negative_points: int,\tdilation_strength: int,\tget_point_prompts: bool = True,\tget_box_prompts: bool = False)"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator.n_positive_points", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator.n_positive_points", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator.n_negative_points", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator.n_negative_points", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator.dilation_strength", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator.dilation_strength", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator.get_box_prompts", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator.get_box_prompts", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator.get_point_prompts", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator.get_point_prompts", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.prompt_generators.IterativePromptGenerator", "modulename": "micro_sam.prompt_generators", "qualname": "IterativePromptGenerator", "kind": "class", "doc": "

    Generate point prompts from an instance segmentation iteratively.

    \n", "bases": "PromptGeneratorBase"}, {"fullname": "micro_sam.sam_annotator", "modulename": "micro_sam.sam_annotator", "kind": "module", "doc": "

    The interactive annotation tools.

    \n"}, {"fullname": "micro_sam.sam_annotator.annotator", "modulename": "micro_sam.sam_annotator.annotator", "kind": "module", "doc": "

    The main GUI for starting annotation tools.

    \n"}, {"fullname": "micro_sam.sam_annotator.annotator.annotator", "modulename": "micro_sam.sam_annotator.annotator", "qualname": "annotator", "kind": "function", "doc": "

    Start the main micro_sam GUI.

    \n\n

    From this GUI you can select which annotation tool you want to use and then\nselect the parameters for the tool.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.annotator_2d", "modulename": "micro_sam.sam_annotator.annotator_2d", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.annotator_2d.annotator_2d", "modulename": "micro_sam.sam_annotator.annotator_2d", "qualname": "annotator_2d", "kind": "function", "doc": "

    The 2d annotation tool.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The napari viewer, only returned if return_viewer=True.

    \n
    \n", "signature": "(\traw: numpy.ndarray,\tembedding_path: Optional[str] = None,\tshow_embeddings: bool = False,\tsegmentation_result: Optional[numpy.ndarray] = None,\tmodel_type: str = 'vit_h',\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\treturn_viewer: bool = False,\tv: Optional[napari.viewer.Viewer] = None,\tpredictor: Optional[segment_anything.predictor.SamPredictor] = None,\tprecompute_amg_state: bool = False) -> Optional[napari.viewer.Viewer]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.annotator_3d", "modulename": "micro_sam.sam_annotator.annotator_3d", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.annotator_3d.annotator_3d", "modulename": "micro_sam.sam_annotator.annotator_3d", "qualname": "annotator_3d", "kind": "function", "doc": "

    The 3d annotation tool.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The napari viewer, only returned if return_viewer=True.

    \n
    \n", "signature": "(\traw: numpy.ndarray,\tembedding_path: Optional[str] = None,\tshow_embeddings: bool = False,\tsegmentation_result: Optional[numpy.ndarray] = None,\tmodel_type: str = 'vit_h',\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\treturn_viewer: bool = False,\tpredictor: Optional[segment_anything.predictor.SamPredictor] = None) -> Optional[napari.viewer.Viewer]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.annotator_tracking", "modulename": "micro_sam.sam_annotator.annotator_tracking", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.annotator_tracking.annotator_tracking", "modulename": "micro_sam.sam_annotator.annotator_tracking", "qualname": "annotator_tracking", "kind": "function", "doc": "

    The annotation tool for tracking in timeseries data.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The napari viewer, only returned if return_viewer=True.

    \n
    \n", "signature": "(\traw: numpy.ndarray,\tembedding_path: Optional[str] = None,\tshow_embeddings: bool = False,\ttracking_result: Optional[str] = None,\tmodel_type: str = 'vit_h',\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\treturn_viewer: bool = False,\tpredictor: Optional[segment_anything.predictor.SamPredictor] = None) -> Optional[napari.viewer.Viewer]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.gui_utils", "modulename": "micro_sam.sam_annotator.gui_utils", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.gui_utils.show_wrong_file_warning", "modulename": "micro_sam.sam_annotator.gui_utils", "qualname": "show_wrong_file_warning", "kind": "function", "doc": "

    Show dialog if the data signature does not match the signature stored in the file.

    \n\n
    The user can choose from the following options in this dialog:
    \n\n
    \n \n
    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    Path to a file (new or old) depending on user decision

    \n
    \n", "signature": "(file_path: Union[str, os.PathLike]) -> Union[str, os.PathLike]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator", "modulename": "micro_sam.sam_annotator.image_series_annotator", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator.image_series_annotator", "modulename": "micro_sam.sam_annotator.image_series_annotator", "qualname": "image_series_annotator", "kind": "function", "doc": "

    Run the 2d annotation tool for a series of images.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\timage_files: List[Union[os.PathLike, str]],\toutput_folder: str,\tembedding_path: Optional[str] = None,\tpredictor: Optional[segment_anything.predictor.SamPredictor] = None,\t**kwargs) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator.image_folder_annotator", "modulename": "micro_sam.sam_annotator.image_series_annotator", "qualname": "image_folder_annotator", "kind": "function", "doc": "

    Run the 2d annotation tool for a series of images in a folder.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tinput_folder: str,\toutput_folder: str,\tpattern: str = '*',\tembedding_path: Optional[str] = None,\tpredictor: Optional[segment_anything.predictor.SamPredictor] = None,\t**kwargs) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.util", "modulename": "micro_sam.sam_annotator.util", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.util.point_layer_to_prompts", "modulename": "micro_sam.sam_annotator.util", "qualname": "point_layer_to_prompts", "kind": "function", "doc": "

    Extract point prompts for SAM from a napari point layer.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The point coordinates for the prompts.\n The labels (positive or negative / 1 or 0) for the prompts.

    \n
    \n", "signature": "(\tlayer: napari.layers.points.points.Points,\ti=None,\ttrack_id=None,\twith_stop_annotation=True) -> Optional[Tuple[numpy.ndarray, numpy.ndarray]]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.util.shape_layer_to_prompts", "modulename": "micro_sam.sam_annotator.util", "qualname": "shape_layer_to_prompts", "kind": "function", "doc": "

    Extract prompts for SAM from a napari shape layer.

    \n\n

    Extracts the bounding box for 'rectangle' shapes and the bounding box and corresponding mask\nfor 'ellipse' and 'polygon' shapes.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The box prompts.\n The mask prompts.

    \n
    \n", "signature": "(\tlayer: napari.layers.shapes.shapes.Shapes,\tshape: Tuple[int, int],\ti=None,\ttrack_id=None) -> Tuple[List[numpy.ndarray], List[Optional[numpy.ndarray]]]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.util.prompt_layer_to_state", "modulename": "micro_sam.sam_annotator.util", "qualname": "prompt_layer_to_state", "kind": "function", "doc": "

    Get the state of the track from a point layer for a given timeframe.

    \n\n

    Only relevant for annotator_tracking.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The state of this frame (either \"division\" or \"track\").

    \n
    \n", "signature": "(prompt_layer: napari.layers.points.points.Points, i: int) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.util.prompt_layers_to_state", "modulename": "micro_sam.sam_annotator.util", "qualname": "prompt_layers_to_state", "kind": "function", "doc": "

    Get the state of the track from a point layer and shape layer for a given timeframe.

    \n\n

    Only relevant for annotator_tracking.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The state of this frame (either \"division\" or \"track\").

    \n
    \n", "signature": "(\tpoint_layer: napari.layers.points.points.Points,\tbox_layer: napari.layers.shapes.shapes.Shapes,\ti: int) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data", "modulename": "micro_sam.sample_data", "kind": "module", "doc": "

    Sample microscopy data.

    \n\n

    You can change the download location for sample data and model weights\nby setting the environment variable: MICROSAM_CACHEDIR

    \n\n

By default sample data is downloaded to a folder named 'micro_sam/sample_data'\ninside your default cache directory, e.g.:\n * Mac: ~/Library/Caches/\n * Unix: ~/.cache/ or the value of the XDG_CACHE_HOME environment variable, if defined.\n * Windows: C:\\Users\\<user>\\AppData\\Local\\<AppAuthor>\\<AppName>\\Cache

    \n"}, {"fullname": "micro_sam.sample_data.fetch_image_series_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_image_series_example_data", "kind": "function", "doc": "

    Download the sample images for the image series annotator.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The folder that contains the downloaded data.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_image_series", "modulename": "micro_sam.sample_data", "qualname": "sample_data_image_series", "kind": "function", "doc": "

    Provides image series example image to napari.

    \n\n

    Opens as three separate image layers in napari (one per image in series).\nThe third image in the series has a different size and modality.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_wholeslide_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_wholeslide_example_data", "kind": "function", "doc": "

    Download the sample data for the 2d annotator.

    \n\n

    This downloads part of a whole-slide image from the NeurIPS Cell Segmentation Challenge.\nSee https://neurips22-cellseg.grand-challenge.org/ for details on the data.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The path of the downloaded image.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_wholeslide", "modulename": "micro_sam.sample_data", "qualname": "sample_data_wholeslide", "kind": "function", "doc": "

    Provides wholeslide 2d example image to napari.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_livecell_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_livecell_example_data", "kind": "function", "doc": "

    Download the sample data for the 2d annotator.

    \n\n

    This downloads a single image from the LiveCELL dataset.\nSee https://doi.org/10.1038/s41592-021-01249-6 for details on the data.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The path of the downloaded image.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_livecell", "modulename": "micro_sam.sample_data", "qualname": "sample_data_livecell", "kind": "function", "doc": "

    Provides livecell 2d example image to napari.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_hela_2d_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_hela_2d_example_data", "kind": "function", "doc": "

    Download the sample data for the 2d annotator.

    \n\n

    This downloads a single image from the HeLa CTC dataset.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The path of the downloaded image.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> Union[str, os.PathLike]:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_hela_2d", "modulename": "micro_sam.sample_data", "qualname": "sample_data_hela_2d", "kind": "function", "doc": "

    Provides HeLa 2d example image to napari.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_3d_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_3d_example_data", "kind": "function", "doc": "

    Download the sample data for the 3d annotator.

    \n\n

    This downloads the Lucchi++ datasets from https://casser.io/connectomics/.\nIt is a dataset for mitochondria segmentation in EM.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The folder that contains the downloaded data.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_3d", "modulename": "micro_sam.sample_data", "qualname": "sample_data_3d", "kind": "function", "doc": "

    Provides Lucchi++ 3d example image to napari.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_tracking_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_tracking_example_data", "kind": "function", "doc": "

    Download the sample data for the tracking annotator.

    \n\n

    This data is the cell tracking challenge dataset DIC-C2DH-HeLa.\nCell tracking challenge webpage: http://data.celltrackingchallenge.net\nHeLa cells on a flat glass\nDr. G. van Cappellen. Erasmus Medical Center, Rotterdam, The Netherlands\nTraining dataset: http://data.celltrackingchallenge.net/training-datasets/DIC-C2DH-HeLa.zip (37 MB)\nChallenge dataset: http://data.celltrackingchallenge.net/challenge-datasets/DIC-C2DH-HeLa.zip (41 MB)

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The folder that contains the downloaded data.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_tracking", "modulename": "micro_sam.sample_data", "qualname": "sample_data_tracking", "kind": "function", "doc": "

    Provides tracking example dataset to napari.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_tracking_segmentation_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_tracking_segmentation_data", "kind": "function", "doc": "

    Download groundtruth segmentation for the tracking example data.

    \n\n

    This downloads the groundtruth segmentation for the image data from fetch_tracking_example_data.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The folder that contains the downloaded data.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_segmentation", "modulename": "micro_sam.sample_data", "qualname": "sample_data_segmentation", "kind": "function", "doc": "

    Provides segmentation example dataset to napari.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.synthetic_data", "modulename": "micro_sam.sample_data", "qualname": "synthetic_data", "kind": "function", "doc": "

    Create synthetic image data and segmentation for training.

    \n", "signature": "(shape, seed=None):", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_nucleus_3d_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_nucleus_3d_example_data", "kind": "function", "doc": "

    Download the sample data for 3d segmentation of nuclei.

    \n\n

    This data contains a small crop from a volume from the publication\n\"Efficient automatic 3D segmentation of cell nuclei for high-content screening\"\nhttps://doi.org/10.1186/s12859-022-04737-4

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The path of the downloaded image.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.training", "modulename": "micro_sam.training", "kind": "module", "doc": "

    Functionality for training Segment Anything.

    \n"}, {"fullname": "micro_sam.training.joint_sam_trainer", "modulename": "micro_sam.training.joint_sam_trainer", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer", "kind": "class", "doc": "

    Trainer class for training the Segment Anything model.

    \n\n

    This class is derived from torch_em.trainer.DefaultTrainer.\nCheck out https://github.com/constantinpape/torch-em/blob/main/torch_em/trainer/default_trainer.py\nfor details on its usage and implementation.

    \n\n
    Arguments:
    \n\n\n", "bases": "micro_sam.training.sam_trainer.SamTrainer"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer.__init__", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer.__init__", "kind": "function", "doc": "

    \n", "signature": "(\tunetr: torch.nn.modules.module.Module,\tinstance_loss: torch.nn.modules.module.Module,\tinstance_metric: torch.nn.modules.module.Module,\t**kwargs)"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer.unetr", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer.unetr", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer.instance_loss", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer.instance_loss", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer.instance_metric", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer.instance_metric", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer.save_checkpoint", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer.save_checkpoint", "kind": "function", "doc": "

    \n", "signature": "(self, name, current_metric, best_metric, **extra_save_dict):", "funcdef": "def"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer.load_checkpoint", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer.load_checkpoint", "kind": "function", "doc": "

    \n", "signature": "(self, checkpoint='best'):", "funcdef": "def"}, {"fullname": "micro_sam.training.sam_trainer", "modulename": "micro_sam.training.sam_trainer", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer", "kind": "class", "doc": "

    Trainer class for training the Segment Anything model.

    \n\n

    This class is derived from torch_em.trainer.DefaultTrainer.\nCheck out https://github.com/constantinpape/torch-em/blob/main/torch_em/trainer/default_trainer.py\nfor details on its usage and implementation.

    \n\n
    Arguments:
    \n\n\n", "bases": "torch_em.trainer.default_trainer.DefaultTrainer"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.__init__", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.__init__", "kind": "function", "doc": "

    \n", "signature": "(\tconvert_inputs,\tn_sub_iteration: int,\tn_objects_per_batch: Optional[int] = None,\tmse_loss: torch.nn.modules.module.Module = MSELoss(),\tprompt_generator: micro_sam.prompt_generators.PromptGeneratorBase = <micro_sam.prompt_generators.IterativePromptGenerator object>,\tmask_prob: float = 0.5,\t**kwargs)"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.convert_inputs", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.convert_inputs", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.mse_loss", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.mse_loss", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.n_objects_per_batch", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.n_objects_per_batch", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.n_sub_iteration", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.n_sub_iteration", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.prompt_generator", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.prompt_generator", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.mask_prob", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.mask_prob", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.trainable_sam", "modulename": "micro_sam.training.trainable_sam", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM", "kind": "class", "doc": "

    Wrapper to make the SegmentAnything model trainable.

    \n\n
    Arguments:
    \n\n\n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.__init__", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.__init__", "kind": "function", "doc": "

    Initializes internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(\tsam: segment_anything.modeling.sam.Sam,\tdevice: Union[str, torch.device])"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.sam", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.sam", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.device", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.device", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.transform", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.transform", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.preprocess", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.preprocess", "kind": "function", "doc": "

    Resize, normalize pixel values and pad to a square input.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The resized, normalized and padded tensor.\n The shape of the image after resizing.

    \n
    \n", "signature": "(self, x: torch.Tensor) -> Tuple[torch.Tensor, Tuple[int, int]]:", "funcdef": "def"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.image_embeddings_oft", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.image_embeddings_oft", "kind": "function", "doc": "

    \n", "signature": "(self, batched_inputs):", "funcdef": "def"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.forward", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.forward", "kind": "function", "doc": "

    Forward pass.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The predicted segmentation masks and iou values.

    \n
    \n", "signature": "(\tself,\tbatched_inputs: List[Dict[str, Any]],\timage_embeddings: torch.Tensor,\tmultimask_output: bool = False) -> List[Dict[str, Any]]:", "funcdef": "def"}, {"fullname": "micro_sam.training.util", "modulename": "micro_sam.training.util", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.identity", "modulename": "micro_sam.training.util", "qualname": "identity", "kind": "function", "doc": "

    Identity transformation.

    \n\n

    This is a helper function to skip data normalization when finetuning SAM.\nData normalization is performed within the model and should thus be skipped as\na preprocessing step in training.

    \n", "signature": "(x):", "funcdef": "def"}, {"fullname": "micro_sam.training.util.get_trainable_sam_model", "modulename": "micro_sam.training.util", "qualname": "get_trainable_sam_model", "kind": "function", "doc": "

    Get the trainable sam model.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The trainable segment anything model.

    \n
    \n", "signature": "(\tmodel_type: str = 'vit_h',\tdevice: Union[str, torch.device, NoneType] = None,\tcheckpoint_path: Union[os.PathLike, str, NoneType] = None,\tfreeze: Optional[List[str]] = None) -> micro_sam.training.trainable_sam.TrainableSAM:", "funcdef": "def"}, {"fullname": "micro_sam.training.util.ConvertToSamInputs", "modulename": "micro_sam.training.util", "qualname": "ConvertToSamInputs", "kind": "class", "doc": "

    Convert outputs of data loader to the expected batched inputs of the SegmentAnything model.

    \n\n
    Arguments:
    \n\n\n"}, {"fullname": "micro_sam.training.util.ConvertToSamInputs.__init__", "modulename": "micro_sam.training.util", "qualname": "ConvertToSamInputs.__init__", "kind": "function", "doc": "

    \n", "signature": "(\ttransform: Optional[segment_anything.utils.transforms.ResizeLongestSide],\tdilation_strength: int = 10,\tbox_distortion_factor: Optional[float] = None)"}, {"fullname": "micro_sam.training.util.ConvertToSamInputs.dilation_strength", "modulename": "micro_sam.training.util", "qualname": "ConvertToSamInputs.dilation_strength", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ConvertToSamInputs.transform", "modulename": "micro_sam.training.util", "qualname": "ConvertToSamInputs.transform", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ConvertToSamInputs.box_distortion_factor", "modulename": "micro_sam.training.util", "qualname": "ConvertToSamInputs.box_distortion_factor", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeRawTrafo", "modulename": "micro_sam.training.util", "qualname": "ResizeRawTrafo", "kind": "class", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeRawTrafo.__init__", "modulename": "micro_sam.training.util", "qualname": "ResizeRawTrafo.__init__", "kind": "function", "doc": "

    \n", "signature": "(desired_shape, do_rescaling=True, padding='constant')"}, {"fullname": "micro_sam.training.util.ResizeRawTrafo.desired_shape", "modulename": "micro_sam.training.util", "qualname": "ResizeRawTrafo.desired_shape", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeRawTrafo.padding", "modulename": "micro_sam.training.util", "qualname": "ResizeRawTrafo.padding", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeRawTrafo.do_rescaling", "modulename": "micro_sam.training.util", "qualname": "ResizeRawTrafo.do_rescaling", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeLabelTrafo", "modulename": "micro_sam.training.util", "qualname": "ResizeLabelTrafo", "kind": "class", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeLabelTrafo.__init__", "modulename": "micro_sam.training.util", "qualname": "ResizeLabelTrafo.__init__", "kind": "function", "doc": "

    \n", "signature": "(desired_shape, padding='constant', min_size=0)"}, {"fullname": "micro_sam.training.util.ResizeLabelTrafo.desired_shape", "modulename": "micro_sam.training.util", "qualname": "ResizeLabelTrafo.desired_shape", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeLabelTrafo.padding", "modulename": "micro_sam.training.util", "qualname": "ResizeLabelTrafo.padding", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeLabelTrafo.min_size", "modulename": "micro_sam.training.util", "qualname": "ResizeLabelTrafo.min_size", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.util", "modulename": "micro_sam.util", "kind": "module", "doc": "

    Helper functions for downloading Segment Anything models and predicting image embeddings.

    \n"}, {"fullname": "micro_sam.util.get_cache_directory", "modulename": "micro_sam.util", "qualname": "get_cache_directory", "kind": "function", "doc": "

    Get micro-sam cache directory location.

    \n\n

    Users can set the MICROSAM_CACHEDIR environment variable for a custom cache directory.

    \n", "signature": "() -> None:", "funcdef": "def"}, {"fullname": "micro_sam.util.microsam_cachedir", "modulename": "micro_sam.util", "qualname": "microsam_cachedir", "kind": "function", "doc": "

    Return the micro-sam cache directory.

    \n\n

    Returns the top level cache directory for micro-sam models and sample data.

    \n\n

    Every time this function is called, we check for any user updates made to\nthe MICROSAM_CACHEDIR os environment variable since the last time.

    \n", "signature": "() -> None:", "funcdef": "def"}, {"fullname": "micro_sam.util.models", "modulename": "micro_sam.util", "qualname": "models", "kind": "function", "doc": "

    Return the segmentation models registry.

    \n\n

    We recreate the model registry every time this function is called,\nso any user changes to the default micro-sam cache directory location\nare respected.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.util.get_device", "modulename": "micro_sam.util", "qualname": "get_device", "kind": "function", "doc": "

    Get the torch device.

    \n\n

If no device is passed, the default device for your system is used.\nOtherwise it is checked whether the device you have passed is supported.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The device.

    \n
    \n", "signature": "(\tdevice: Union[str, torch.device, NoneType] = None) -> Union[str, torch.device]:", "funcdef": "def"}, {"fullname": "micro_sam.util.get_sam_model", "modulename": "micro_sam.util", "qualname": "get_sam_model", "kind": "function", "doc": "

    Get the SegmentAnything Predictor.

    \n\n

    This function will download the required model or load it from the cached weight file.\nThis location of the cache can be changed by setting the environment variable: MICROSAM_CACHEDIR.\nThe name of the requested model can be set via model_type.\nSee https://computational-cell-analytics.github.io/micro-sam/micro_sam.html#finetuned-models\nfor an overview of the available models

    \n\n

    Alternatively this function can also load a model from weights stored in a local filepath.\nThe corresponding file path is given via checkpoint_path. In this case model_type\nmust be given as the matching encoder architecture, e.g. \"vit_b\" if the weights are for\na SAM model with vit_b encoder.

    \n\n

By default the models are downloaded to a folder named 'micro_sam/models'\ninside your default cache directory, e.g.:

    \n\n\n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The segment anything predictor.

    \n
    \n", "signature": "(\tmodel_type: str = 'vit_h',\tdevice: Union[str, torch.device, NoneType] = None,\tcheckpoint_path: Union[str, os.PathLike, NoneType] = None,\treturn_sam: bool = False) -> mobile_sam.predictor.SamPredictor:", "funcdef": "def"}, {"fullname": "micro_sam.util.get_custom_sam_model", "modulename": "micro_sam.util", "qualname": "get_custom_sam_model", "kind": "function", "doc": "

    Load a SAM model from a torch_em checkpoint.

    \n\n

    This function enables loading from the checkpoints saved by\nthe functionality in micro_sam.training.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The segment anything predictor.

    \n
    \n", "signature": "(\tcheckpoint_path: Union[str, os.PathLike],\tmodel_type: str = 'vit_h',\tdevice: Union[str, torch.device, NoneType] = None,\treturn_sam: bool = False,\treturn_state: bool = False) -> mobile_sam.predictor.SamPredictor:", "funcdef": "def"}, {"fullname": "micro_sam.util.export_custom_sam_model", "modulename": "micro_sam.util", "qualname": "export_custom_sam_model", "kind": "function", "doc": "

    Export a finetuned segment anything model to the standard model format.

    \n\n

    The exported model can be used by the interactive annotation tools in micro_sam.annotator.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tcheckpoint_path: Union[str, os.PathLike],\tmodel_type: str,\tsave_path: Union[str, os.PathLike]) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.util.get_model_names", "modulename": "micro_sam.util", "qualname": "get_model_names", "kind": "function", "doc": "

    \n", "signature": "() -> Iterable:", "funcdef": "def"}, {"fullname": "micro_sam.util.precompute_image_embeddings", "modulename": "micro_sam.util", "qualname": "precompute_image_embeddings", "kind": "function", "doc": "

    Compute the image embeddings (output of the encoder) for the input.

    \n\n

    If 'save_path' is given the embeddings will be loaded/saved in a zarr container.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tpredictor: mobile_sam.predictor.SamPredictor,\tinput_: numpy.ndarray,\tsave_path: Optional[str] = None,\tlazy_loading: bool = False,\tndim: Optional[int] = None,\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\twrong_file_callback: Optional[Callable] = None) -> Dict[str, Any]:", "funcdef": "def"}, {"fullname": "micro_sam.util.set_precomputed", "modulename": "micro_sam.util", "qualname": "set_precomputed", "kind": "function", "doc": "

    Set the precomputed image embeddings for a predictor.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tpredictor: mobile_sam.predictor.SamPredictor,\timage_embeddings: Dict[str, Any],\ti: Optional[int] = None):", "funcdef": "def"}, {"fullname": "micro_sam.util.compute_iou", "modulename": "micro_sam.util", "qualname": "compute_iou", "kind": "function", "doc": "

    Compute the intersection over union of two masks.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The intersection over union of the two masks.

    \n
    \n", "signature": "(mask1: numpy.ndarray, mask2: numpy.ndarray) -> float:", "funcdef": "def"}, {"fullname": "micro_sam.util.get_centers_and_bounding_boxes", "modulename": "micro_sam.util", "qualname": "get_centers_and_bounding_boxes", "kind": "function", "doc": "

Returns the center coordinates and bounding boxes of the foreground instances in the ground-truth.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    A dictionary that maps object ids to the corresponding centroid.\n A dictionary that maps object_ids to the corresponding bounding box.

    \n
    \n", "signature": "(\tsegmentation: numpy.ndarray,\tmode: str = 'v') -> Tuple[Dict[int, numpy.ndarray], Dict[int, tuple]]:", "funcdef": "def"}, {"fullname": "micro_sam.util.load_image_data", "modulename": "micro_sam.util", "qualname": "load_image_data", "kind": "function", "doc": "

    Helper function to load image data from file.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The image data.

    \n
    \n", "signature": "(\tpath: str,\tkey: Optional[str] = None,\tlazy_loading: bool = False) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.util.segmentation_to_one_hot", "modulename": "micro_sam.util", "qualname": "segmentation_to_one_hot", "kind": "function", "doc": "

    Convert the segmentation to one-hot encoded masks.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The one-hot encoded masks.

    \n
    \n", "signature": "(\tsegmentation: numpy.ndarray,\tsegmentation_ids: Optional[numpy.ndarray] = None) -> torch.Tensor:", "funcdef": "def"}, {"fullname": "micro_sam.visualization", "modulename": "micro_sam.visualization", "kind": "module", "doc": "

    Functionality for visualizing image embeddings.

    \n"}, {"fullname": "micro_sam.visualization.compute_pca", "modulename": "micro_sam.visualization", "qualname": "compute_pca", "kind": "function", "doc": "

Compute the PCA projection of the embeddings to visualize them as an RGB image.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    PCA of the embeddings, mapped to the pixels.

    \n
    \n", "signature": "(embeddings: numpy.ndarray) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.visualization.project_embeddings_for_visualization", "modulename": "micro_sam.visualization", "qualname": "project_embeddings_for_visualization", "kind": "function", "doc": "

    Project image embeddings to pixel-wise PCA.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The PCA of the embeddings.\n The scale factor for resizing to the original image size.

    \n
    \n", "signature": "(\timage_embeddings: Dict[str, Any]) -> Tuple[numpy.ndarray, Tuple[float, ...]]:", "funcdef": "def"}]; + /** pdoc search index */const docs = [{"fullname": "micro_sam", "modulename": "micro_sam", "kind": "module", "doc": "

    Segment Anything for Microscopy

    \n\n

    Segment Anything for Microscopy implements automatic and interactive annotation for microscopy data. It is built on top of Segment Anything by Meta AI and specializes it for microscopy and other bio-imaging data.\nIts core components are:

    \n\n\n\n

    Our goal is to build fast and interactive annotation tools for microscopy data, like interactive cell segmentation from bounding boxes:

    \n\n

    \"box-prompts\"

    \n\n

    micro_sam is under active development, but our goal is to keep the changes to the user interface and the interface of the python library as small as possible.\nOn our roadmap for more functionality are:

    \n\n\n\n

    If you run into any problems or have questions please open an issue on Github or reach out via image.sc using the tag micro-sam and tagging @constantinpape.

    \n\n

    Quickstart

    \n\n

    You can install micro_sam via conda:

    \n\n
    $ conda install -c conda-forge micro_sam napari pyqt\n
    \n\n

    We also provide experimental installers for all operating systems.\nFor more details on the available installation options check out the installation section.

    \n\n

    After installing micro_sam you can run the annotation tool via $ micro_sam.annotator, which opens a menu for selecting the annotation tool and its inputs.\nSee the annotation tool section for an overview and explanation of the annotation functionality.

    \n\n

    The micro_sam python library can be used via

    \n\n
    \n
    import micro_sam\n
    \n
    \n\n

    It is explained in more detail here.

    \n\n
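To give a first impression of the library, here is a minimal sketch (not an official example) that combines functions from micro_sam.util and micro_sam.prompt_based_segmentation for prompt-based segmentation; the random image and the point position are placeholders for your own data:

import numpy as np

from micro_sam.util import get_sam_model, precompute_image_embeddings
from micro_sam.prompt_based_segmentation import segment_from_points

# Load your own 2d image here; the random array only stands in for real data.
image = np.random.randint(0, 255, size=(512, 512), dtype="uint8")

# Download (or load from the cache) the model and wrap it in a SamPredictor.
predictor = get_sam_model(model_type="vit_h")

# Compute the image embeddings; pass save_path=... to cache them in a zarr container.
image_embeddings = precompute_image_embeddings(predictor, image)

# Segment the object under a single positive point prompt (label 1 = positive).
# The point sits at the image center, so the axis convention does not matter here.
points = np.array([[256, 256]])
labels = np.array([1])
mask = segment_from_points(predictor, points, labels, image_embeddings=image_embeddings)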

    Our support for finetuned models is still experimental. We will soon release better finetuned models and host them on zenodo.\nFor now, check out the example script for the 2d annotator to see how the finetuned models can be used within micro_sam.

    \n\n

    Citation

    \n\n

    If you are using micro_sam in your research please cite

    \n\n\n\n

    Installation

    \n\n

    There are three ways to install micro_sam:

    \n\n\n\n

    From mamba

    \n\n

    mamba is a drop-in replacement for conda, but much faster.\nWhile the steps below may also work with conda, we highly recommend using mamba.\nYou can follow the instructions here to install mamba.

    \n\n

    IMPORTANT: Make sure to avoid installing anything in the base environment.

    \n\n

    micro_sam can be installed in an existing environment via:

    \n\n
    $ mamba install -c conda-forge micro_sam\n
    \n\n

or you can create a new environment (here called micro-sam) via:

    \n\n
    $ mamba create -c conda-forge -n micro-sam micro_sam\n
    \n\n

If you want to use the GPU, you need to install PyTorch from the pytorch channel instead of conda-forge. For example:

    \n\n
    $ mamba create -c pytorch -c nvidia -c conda-forge micro_sam pytorch pytorch-cuda=12.1\n
    \n\n

You may need to change this command to install the correct CUDA version for your computer; see https://pytorch.org/ for details.

    \n\n

    You also need to install napari to use the annotation tool:

    \n\n
    $ mamba install -c conda-forge napari pyqt\n
    \n\n

    (We don't include napari in the default installation dependencies to keep the choice of rendering backend flexible.)

    \n\n

    From source

    \n\n

To install micro_sam from source, we recommend first setting up an environment with the necessary requirements:

    \n\n\n\n

To create one of these environments and install micro_sam into it, follow these steps:

    \n\n
      \n
1. Clone the repository:
    \n\n
    $ git clone https://github.com/computational-cell-analytics/micro-sam\n
    \n\n
      \n
2. Enter it:
    \n\n
    $ cd micro-sam\n
    \n\n
      \n
3. Create the GPU or CPU environment:
    \n\n
    $ mamba env create -f <ENV_FILE>.yaml\n
    \n\n
      \n
4. Activate the environment:
    \n\n
    $ mamba activate sam\n
    \n\n
      \n
5. Install micro_sam:
    \n\n
    $ pip install -e .\n
    \n\n

    Troubleshooting:

    \n\n\n\n

    From installer

    \n\n

    We also provide installers for Linux and Windows:

    \n\n\n\n

    The installers are still experimental and not fully tested. Mac is not supported yet, but we are working on also providing an installer for it.

    \n\n

If you encounter problems with them, please consider installing micro_sam via mamba instead.

    \n\n

    Linux Installer:

    \n\n

    To use the installer:

    \n\n\n\n

    \n\n

    Windows Installer:

    \n\n\n\n

    Annotation Tools

    \n\n

    micro_sam provides applications for fast interactive 2d segmentation, 3d segmentation and tracking.\nSee an example for interactive cell segmentation in phase-contrast microscopy (left), interactive segmentation\nof mitochondria in volume EM (middle) and interactive tracking of cells (right).

    \n\n

    \n\n

    \n\n

    The annotation tools can be started from the micro_sam GUI, the command line or from python scripts. The micro_sam GUI can be started by

    \n\n
    $ micro_sam.annotator\n
    \n\n

    They are built using napari and magicgui to provide the viewer and user interface.\nIf you are not familiar with napari yet, start here.\nThe micro_sam tools use the point layer, shape layer and label layer.

    \n\n

    The annotation tools are explained in detail below. In addition to the documentation here we also provide video tutorials.

    \n\n

    Starting via GUI

    \n\n

The annotation tools can be started from a central GUI, which is launched with the command $ micro_sam.annotator or via the executable from an installer.

    \n\n

In the GUI you can select which of the four annotation tools you want to use:

    \n\n

After selecting a tool, a new window will open where you can select the input file path and other optional parameters. Then click the top button to start the tool. Note: if you are not starting the annotation tool with a path to pre-computed embeddings, it can take several minutes for napari to open after pressing the button, because the embeddings are being computed first.

    \n\n

    Changes in version 0.3:

    \n\n

    We have made two changes in version 0.3 that are not reflected in the documentation below yet:

    \n\n\n\n

    Annotator 2D

    \n\n

    The 2d annotator can be started by

    \n\n\n\n

    The user interface of the 2d annotator looks like this:

    \n\n

    \n\n

    It contains the following elements:

    \n\n
1. The napari layers for the image, segmentations and prompts:
   • box_prompts: shape layer that is used to provide box prompts to SegmentAnything.
   • prompts: point layer that is used to provide point prompts to SegmentAnything. Positive prompts (green points) mark the object you want to segment, negative prompts (red points) mark the outside of the object.
   • current_object: label layer that contains the object you are currently segmenting.
   • committed_objects: label layer with the objects that have already been segmented.
   • auto_segmentation: label layer with the results from using SegmentAnything for automatic instance segmentation.
   • raw: image layer that shows the image data.
2. The prompt menu for changing the currently selected point from positive to negative and vice versa. This can also be done by pressing t.
3. The menu for automatic segmentation. Pressing Segment All Objects will run automatic segmentation. The results will be displayed in the auto_segmentation layer. Change the parameters pred_iou_thresh and stability_score_thresh to control how many objects are segmented. (The library-level counterpart of this functionality is sketched after this list.)
4. The menu for interactive segmentation. Pressing Segment Object (or s) will run segmentation for the current prompts. The result is displayed in current_object.
5. The menu for committing the segmentation. When pressing Commit (or c) the result from the selected layer (either current_object or auto_segmentation) will be transferred from the respective layer to committed_objects.
6. The menu for clearing the current annotations. Pressing Clear Annotations (or shift c) will clear the current annotations and the current segmentation.
    \n\n
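The automatic segmentation menu corresponds to the automatic mask generation of the Python library. As a minimal sketch of the same functionality in code (the random image is a placeholder for real data), the two thresholds from the menu can also be explored programmatically:

import numpy as np

from micro_sam.util import get_sam_model
from micro_sam.instance_segmentation import AutomaticMaskGenerator, mask_data_to_segmentation

# A random image stands in for real microscopy data.
predictor = get_sam_model(model_type="vit_b")
image = np.random.randint(0, 255, size=(512, 512), dtype="uint8")

amg = AutomaticMaskGenerator(predictor)
amg.initialize(image)  # the expensive step: computes the embeddings and candidate masks
masks = amg.generate(pred_iou_thresh=0.88, stability_score_thresh=0.95)

# Convert the mask data into an instance segmentation (a label image).
segmentation = mask_data_to_segmentation(masks, with_background=True)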

    Note that point prompts and box prompts can be combined. When you're using point prompts you can only segment one object at a time. With box prompts you can segment several objects at once.

    \n\n

    Check out this video for a tutorial for the 2d annotation tool.

    \n\n

We also provide the image series annotator, which can be used for running the 2d annotator for several images in a folder. You can start it by clicking Image series annotator in the GUI, by running micro_sam.image_series_annotator on the command line, or from a python script.

    \n\n

    Annotator 3D

    \n\n

    The 3d annotator can be started by

    \n\n\n\n

    The user interface of the 3d annotator looks like this:

    \n\n

    \n\n

    Most elements are the same as in the 2d annotator:

    \n\n
1. The napari layers that contain the image, segmentation and prompts. Same as for the 2d annotator but without the auto_segmentation layer.
2. The prompt menu.
3. The menu for interactive segmentation.
4. The 3d segmentation menu. Pressing Segment All Slices (or Shift-S) will extend the segmentation for the current object across the volume.
5. The menu for committing the segmentation.
6. The menu for clearing the current annotations.
    \n\n

    Note that you can only segment one object at a time with the 3d annotator.

    \n\n

    Check out this video for a tutorial for the 3d annotation tool.

    \n\n

    Annotator Tracking

    \n\n

    The tracking annotator can be started by

    \n\n\n\n

    The user interface of the tracking annotator looks like this:

    \n\n

    \n\n

    Most elements are the same as in the 2d annotator:

    \n\n
1. The napari layers that contain the image, segmentation and prompts. Same as for the 2d annotator, but without the auto_segmentation layer; current_tracks and committed_tracks are the equivalents of current_object and committed_objects.
2. The prompt menu.
3. The menu with tracking settings: track_state is used to indicate that the object you are tracking is dividing in the current frame, track_id is used to select which of the tracks after division you are following.
4. The menu for interactive segmentation.
5. The tracking menu. Press Track Object (or Shift-S) to segment the current object across time.
6. The menu for committing the current tracking result.
7. The menu for clearing the current annotations.
    \n\n

Note that the tracking annotator only supports 2d image data; volumetric data is not supported.

    \n\n

    Check out this video for a tutorial for how to use the tracking annotation tool.

    \n\n

    Tips & Tricks

    \n\n\n\n

    Known limitations

    \n\n\n\n

    How to use the Python Library

    \n\n

    The python library can be imported via

    \n\n
    \n
    import micro_sam\n
    \n
    \n\n

The library is organized into sub-modules that implement the different functionality, for example micro_sam.prompt_based_segmentation for interactive segmentation and micro_sam.instance_segmentation for automatic instance segmentation.

    \n\n\n\n

    You can import these sub-modules via

    \n\n
    \n
import micro_sam.prompt_based_segmentation
import micro_sam.instance_segmentation
# etc.
    \n
    \n\n

    This functionality is used to implement the interactive annotation tools and can also be used as a standalone python library.\nSome preliminary examples for how to use the python library can be found here. Check out the Submodules documentation for more details.
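As a rough illustration of how the library pieces fit together for prompt-based 2d segmentation, consider the following sketch. The random image is a placeholder for real data, and the exact argument names and point coordinate convention should be checked against the micro_sam.prompt_based_segmentation documentation.

import numpy as np

from micro_sam.util import get_sam_model, precompute_image_embeddings
from micro_sam.prompt_based_segmentation import segment_from_points

# Load one of the models; a random image stands in for real microscopy data.
predictor = get_sam_model(model_type="vit_b")
image = np.random.randint(0, 255, size=(512, 512), dtype="uint8")

# Pre-compute the image embeddings (they can optionally be cached via save_path).
image_embeddings = precompute_image_embeddings(predictor, image)

# Segment one object from a single positive point prompt.
points = np.array([[256, 256]])  # point coordinates; check the module docs for the expected axis order
labels = np.array([1])           # 1 = positive prompt, 0 = negative prompt
mask = segment_from_points(predictor, points, labels, image_embeddings=image_embeddings)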

    \n\n

    Training your own model

    \n\n

We reimplement the training logic described in the Segment Anything publication to enable finetuning on custom data. We use this functionality to provide the finetuned microscopy models, and it can also be used to finetune models on your own data. In fact the best results can be expected when finetuning on your own data, and we found that it does not require much annotated training data to get significant improvements in model performance. So a good strategy is to annotate a few images with one of the provided models using one of the interactive annotation tools and, if the annotation is not working as well as expected yet, finetune on the annotated data.

    \n\n

    The training logic is implemented in micro_sam.training and is based on torch-em. Please check out examples/finetuning to see how you can finetune on your own data with it. The script finetune_hela.py contains an example for finetuning on a small custom microscopy dataset and use_finetuned_model.py shows how this model can then be used in the interactive annotation tools.
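As a minimal sketch of what such a finetuning run might look like (the checkpoint name and keyword arguments are assumptions to be verified against micro_sam.training and the example scripts; the loaders have to be built as in finetune_hela.py):

import micro_sam.training as sam_training

def finetune_sam(train_loader, val_loader):
    """Finetune a SAM model on the (image, label) patches yielded by the loaders.

    The loaders are torch DataLoaders, e.g. built with torch-em as in
    examples/finetuning/finetune_hela.py.
    """
    sam_training.train_sam(
        name="sam_custom",      # checkpoint name (hypothetical)
        model_type="vit_b",     # SAM backbone to finetune
        train_loader=train_loader,
        val_loader=val_loader,
        n_epochs=10,
    )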

    \n\n

    Since release v0.4.0 we also support training an additional decoder for automatic instance segmentation. This yields better results than the automatic mask generation of segment anything and is significantly faster.\nYou can enable training of the decoder by setting train_instance_segmentation = True here.\nThe script instance_segmentation_with_finetuned_model.py shows how to use it for automatic instance segmentation.\nWe will fully integrate this functionality with the annotation tool in the next release.

    \n\n

    More advanced examples, including quantitative and qualitative evaluation, of finetuned models can be found in finetuning, which contains the code for training and evaluating our microscopy models.

    \n\n

    Finetuned models

    \n\n

In addition to the original Segment Anything models, we provide models that were finetuned on microscopy data using the functionality from micro_sam.training. The models are hosted on zenodo. We currently offer the following models:

    \n\n\n\n

The two figures below show the improvements achieved by the finetuned models for LM and EM data.

    \n\n

    \n\n

    \n\n

    You can select which of the models is used in the annotation tools by selecting the corresponding name from the Model Type menu:

    \n\n

    \n\n

To use a specific model in the python library you need to pass the corresponding name as the value of the model_type parameter exposed by all relevant functions. See for example the 2d annotator example, where use_finetuned_model can be set to True to use the vit_b_lm model.
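For example (a minimal sketch; get_sam_model is one of the functions that exposes this parameter):

from micro_sam.util import get_sam_model

# Use the finetuned light microscopy model instead of the default SAM model.
predictor = get_sam_model(model_type="vit_b_lm")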

    \n\n

Note that we are still working on improving these models and may update them from time to time. All older models will stay available for download on zenodo; see the model sources below.

    \n\n

    Which model should I choose?

    \n\n

    As a rule of thumb:

    \n\n\n\n

    See also the figures above for examples where the finetuned models work better than the vanilla models.\nCurrently the model vit_h is used by default.

    \n\n

    We are working on further improving these models and adding new models for other biomedical imaging domains.

    \n\n

    Model Sources

    \n\n

    Here is an overview of all finetuned models we have released to zenodo so far:

    \n\n\n\n

    Some of these models contain multiple versions.

    \n\n

    How to contribute

    \n\n\n\n

    Discuss your ideas

    \n\n

    We welcome new contributions!

    \n\n

    First, discuss your idea by opening a new issue in micro-sam.

    \n\n

    This allows you to ask questions, and have the current developers make suggestions about the best way to implement your ideas.

    \n\n

    You may also find it helpful to look at this developer guide, which explains the organization of the micro-sam code.

    \n\n

    Clone the repository

    \n\n

    We use git for version control.

    \n\n

    Clone the repository, and checkout the development branch:

    \n\n
git clone https://github.com/computational-cell-analytics/micro-sam.git
cd micro-sam
git checkout dev
    \n\n

    Create your development environment

    \n\n

    We use conda to manage our environments. If you don't have this already, install miniconda or mamba to get started.

    \n\n

Now you can create the environment, install the user and developer dependencies, and install micro-sam as an editable installation:

    \n\n
conda env create -f environment-gpu.yml
conda activate sam
python -m pip install -r requirements-dev.txt
python -m pip install -e .
    \n\n

    Make your changes

    \n\n

    Now it's time to make your code changes.

    \n\n

    Typically, changes are made branching off from the development branch. Checkout dev and then create a new branch to work on your changes.

    \n\n
git checkout dev
git checkout -b my-new-feature
    \n\n

    We use google style python docstrings to create documentation for all new code.

    \n\n

    You may also find it helpful to look at this developer guide, which explains the organization of the micro-sam code.

    \n\n

    Testing

    \n\n

    Run the tests

    \n\n

The tests for micro-sam are run with pytest.

    \n\n

    To run the tests:

    \n\n
    pytest\n
    \n\n

    Writing your own tests

    \n\n

    If you have written new code, you will need to write tests to go with it.

    \n\n

    Unit tests

    \n\n

    Unit tests are the preferred style of tests for user contributions. Unit tests check small, isolated parts of the code for correctness. If your code is too complicated to write unit tests easily, you may need to consider breaking it up into smaller functions that are easier to test.

    \n\n

    Tests involving napari

    \n\n

    In cases where tests must use the napari viewer, these tips might be helpful (in particular, the make_napari_viewer_proxy fixture).
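For instance, a minimal napari-based test could look like this (the test name, layer name and data are arbitrary examples; make_napari_viewer_proxy is the napari-provided pytest fixture mentioned above):

import numpy as np

def test_viewer_shows_image(make_napari_viewer_proxy):
    # The fixture creates a napari viewer that is cleaned up after the test.
    viewer = make_napari_viewer_proxy()
    viewer.add_image(np.zeros((16, 16), dtype="uint8"), name="raw")
    assert "raw" in viewer.layers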

    \n\n

    These kinds of tests should be used only in limited circumstances. Developers are advised to prefer smaller unit tests, and avoid integration tests wherever possible.

    \n\n

    Code coverage

    \n\n

    Pytest uses the pytest-cov plugin to automatically determine which lines of code are covered by tests.

    \n\n

    A short summary report is printed to the terminal output whenever you run pytest. The full results are also automatically written to a file named coverage.xml.

    \n\n

    The Coverage Gutters VSCode extension is useful for visualizing which parts of the code need better test coverage. PyCharm professional has a similar feature, and you may be able to find similar tools for your preferred editor.

    \n\n

    We also use codecov.io to display the code coverage results from our Github Actions continuous integration.

    \n\n

    Open a pull request

    \n\n

    Once you've made changes to the code and written some tests to go with it, you are ready to open a pull request. You can mark your pull request as a draft if you are still working on it, and still get the benefit of discussing the best approach with maintainers.

    \n\n

    Remember that typically changes to micro-sam are made branching off from the development branch. So, you will need to open your pull request to merge back into the dev branch like this.

    \n\n

    Optional: Build the documentation

    \n\n

    We use pdoc to build the documentation.

    \n\n

    To build the documentation locally, run this command:

    \n\n
    python build_doc.py\n
    \n\n

    This will start a local server and display the HTML documentation. Any changes you make to the documentation will be updated in real time (you may need to refresh your browser to see the changes).

    \n\n

    If you want to save the HTML files, append --out to the command, like this:

    \n\n
    python build_doc.py --out\n
    \n\n

    This will save the HTML files into a new directory named tmp.

    \n\n

    You can add content to the documentation in two ways:

    \n\n
1. By adding or updating google style python docstrings in the micro-sam code.
   • pdoc will automatically find and include docstrings in the documentation.
2. By adding or editing markdown files in the micro-sam doc directory.
   • If you add a new markdown file to the documentation, you must tell pdoc that it exists by adding a line to the micro_sam/__init__.py module docstring (eg: .. include:: ../doc/my_amazing_new_docs_page.md). Otherwise it will not be included in the final documentation build! (See the sketch after this list.)
    \n\n
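As a rough sketch of what such include lines look like in the module docstring (the first page name is purely illustrative, the second is the example from above; the actual list of included pages differs):

# micro_sam/__init__.py (sketch)
"""
.. include:: ../doc/some_existing_page.md
.. include:: ../doc/my_amazing_new_docs_page.md
"""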

    Optional: Benchmark performance

    \n\n

    There are a number of options you can use to benchmark performance, and identify problems like slow run times or high memory use in micro-sam.

    \n\n\n\n

    Run the benchmark script

    \n\n

    There is a performance benchmark script available in the micro-sam repository at development/benchmark.py.

    \n\n

    To run the benchmark script:

    \n\n
python development/benchmark.py --model_type vit_t --device cpu
    \n\n

    For more details about the user input arguments for the micro-sam benchmark script, see the help:

    \n\n
    python development/benchmark.py --help\n
    \n\n

    Line profiling

    \n\n

    For more detailed line by line performance results, we can use line-profiler.

    \n\n
    \n

    line_profiler is a module for doing line-by-line profiling of functions. kernprof is a convenient script for running either line_profiler or the Python standard library's cProfile or profile modules, depending on what is available.

    \n
    \n\n

    To do line-by-line profiling:

    \n\n
1. Ensure you have line profiler installed: python -m pip install line_profiler
2. Add @profile decorator to any function in the call stack
3. Run kernprof -lv benchmark.py --model_type vit_t --device cpu
    \n\n

    For more details about how to use line-profiler and kernprof, see the documentation.

    \n\n

    \n\n

    Snakeviz visualization

    \n\n

    For more detailed visualizations of profiling results, we use snakeviz.

    \n\n
    \n

SnakeViz is a browser based graphical viewer for the output of Python's cProfile module

    \n
    \n\n
1. Ensure you have snakeviz installed: python -m pip install snakeviz
2. Generate profile file: python -m cProfile -o program.prof benchmark.py --model_type vit_h --device cpu
3. Visualize profile file: snakeviz program.prof
    \n\n

    For more details about how to use snakeviz, see the documentation.

    \n\n

    Memory profiling with memray

    \n\n

    If you need to investigate memory use specifically, we use memray.

    \n\n
    \n

    Memray is a memory profiler for Python. It can track memory allocations in Python code, in native extension modules, and in the Python interpreter itself. It can generate several different types of reports to help you analyze the captured memory usage data. While commonly used as a CLI tool, it can also be used as a library to perform more fine-grained profiling tasks.

    \n
    \n\n

    For more details about how to use memray, see the documentation.

    \n\n

    For Developers

    \n\n

    This software consists of four different python (sub-)modules:

    \n\n\n\n

    Annotation Tools

    \n\n

    The annotation tools are implemented as napari plugins.

    \n\n

    There are four annotation tools:

    \n\n\n\n

    An overview of the functionality of the different tools:

Functionality                                            | annotator_2d | annotator_3d | annotator_tracking
Interactive segmentation                                 | Yes | Yes | Yes
Interactive segmentation for multiple objects at a time  | Yes | No  | No
Interactive 3d segmentation via projection               | No  | Yes | Yes
Support for dividing objects                             | No  | No  | Yes
Automatic segmentation                                   | Yes | Yes | No
    \n\n

    The functionality for image_series_annotator is not listed because it is identical with the functionality of annotator_2d.

    \n\n

Each tool implements the following core logic:

    \n\n
1. The image embeddings (prediction from SAM image encoder) are pre-computed for the input data (2d image, image volume or timeseries). These embeddings can be cached to a zarr file (see the sketch after this list).
2. Interactive (and automatic) segmentation functionality is implemented by a UI based on napari and magicgui functionality.
    \n\n
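As a minimal sketch of the first step (the random image and the zarr path are placeholders), the embeddings can be pre-computed and cached like this:

import numpy as np

from micro_sam.util import get_sam_model, precompute_image_embeddings

predictor = get_sam_model(model_type="vit_b")

# A random 2d image stands in for real data; the zarr path is a placeholder.
image = np.random.randint(0, 255, size=(512, 512), dtype="uint8")
image_embeddings = precompute_image_embeddings(predictor, image, save_path="embeddings.zarr")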

    Each tool has three different entry points:

    \n\n\n\n

    Each tool is implemented in its own submodule, e.g. micro_sam.sam_annotator.annotator_2d.\nThe napari plugin is implemented by a class, e.g. micro_sam.sam_annotator.annotator_2d:Annotator2d, inheriting from micro_sam.sam_annotator._annotator._AnnotatorBase. This class implements the core logic for the plugins.\nThe concrete annotation tools are instantiated by passing widgets from micro_sam.sam_annotator._widgets to it, \nwhich implement the interactive segmentation in 2d, 3d etc.\nThese plugins are designed so that image embeddings can be computed for user-specified image layers in napari.

    \n\n

    The function and CLI entry points are implemented by micro_sam.sam_annotator.annotator_2d:annotator_2d (and corresponding functions for the other annotation tools). They are called with image data, precompute the embeddings for that image and start a napari viewer with this image and the annotation plugin.
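As a minimal sketch of this entry point (the image is random placeholder data, and the keyword arguments follow the pattern described above; check the function signature for the exact parameter names):

import numpy as np

from micro_sam.sam_annotator import annotator_2d

# A random image stands in for real data; embedding_path caches the embeddings to zarr.
image = np.random.randint(0, 255, size=(512, 512), dtype="uint8")
annotator_2d(image, embedding_path="embeddings.zarr", model_type="vit_b")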

    \n\n

    \n\n

    \n\n

    Using micro_sam on BAND

    \n\n

    BAND is a service offered by EMBL Heidelberg that gives access to a virtual desktop for image analysis tasks. It is free to use and micro_sam is installed there.\nIn order to use BAND and start micro_sam on it follow these steps:

    \n\n

    Start BAND

    \n\n\n\n

    \"image\"

    \n\n

    Start micro_sam in BAND

    \n\n\n\n

Transferring data to BAND

    \n\n

To copy data to and from BAND you can use any cloud storage, e.g. ownCloud, dropbox or google drive. For this, it's important to note that copy and paste, which you may need for accessing links on BAND, works a bit differently in BAND:

    \n\n\n\n

    The video below shows how to copy over a link from owncloud and then download the data on BAND using copy and paste:

    \n\n

    https://github.com/computational-cell-analytics/micro-sam/assets/4263537/825bf86e-017e-41fc-9e42-995d21203287

    \n"}, {"fullname": "micro_sam.bioimageio", "modulename": "micro_sam.bioimageio", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.bioimageio.model_export", "modulename": "micro_sam.bioimageio.model_export", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.bioimageio.model_export.DEFAULTS", "modulename": "micro_sam.bioimageio.model_export", "qualname": "DEFAULTS", "kind": "variable", "doc": "

    \n", "default_value": "{'authors': [Author(affiliation='University Goettingen', email=None, orcid=None, name='Anwai Archit', github_user='anwai98'), Author(affiliation='University Goettingen', email=None, orcid=None, name='Constantin Pape', github_user='constantinpape')], 'description': 'Finetuned Segment Anything Model for Microscopy', 'cite': [CiteEntry(text='Archit et al. Segment Anything for Microscopy', doi='10.1101/2023.08.21.554208', url=None)], 'tags': ['segment-anything', 'instance-segmentation']}"}, {"fullname": "micro_sam.bioimageio.model_export.export_sam_model", "modulename": "micro_sam.bioimageio.model_export", "qualname": "export_sam_model", "kind": "function", "doc": "

    Export SAM model to BioImage.IO model format.

    \n\n

    The exported model can be uploaded to bioimage.io and\nbe used in tools that support the BioImage.IO model format.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\timage: numpy.ndarray,\tlabel_image: numpy.ndarray,\tmodel_type: str,\tname: str,\toutput_path: Union[str, os.PathLike],\tcheckpoint_path: Union[str, os.PathLike, NoneType] = None,\t**kwargs) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.bioimageio.predictor_adaptor", "modulename": "micro_sam.bioimageio.predictor_adaptor", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.bioimageio.predictor_adaptor.PredictorAdaptor", "modulename": "micro_sam.bioimageio.predictor_adaptor", "qualname": "PredictorAdaptor", "kind": "class", "doc": "

    Wrapper around the SamPredictor.

    \n\n

    This model supports the same functionality as SamPredictor and can provide mask segmentations\nfrom box, point or mask input prompts.

    \n\n
    Arguments:
    \n\n\n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.bioimageio.predictor_adaptor.PredictorAdaptor.__init__", "modulename": "micro_sam.bioimageio.predictor_adaptor", "qualname": "PredictorAdaptor.__init__", "kind": "function", "doc": "

    Initializes internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(model_type: str)"}, {"fullname": "micro_sam.bioimageio.predictor_adaptor.PredictorAdaptor.sam", "modulename": "micro_sam.bioimageio.predictor_adaptor", "qualname": "PredictorAdaptor.sam", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.bioimageio.predictor_adaptor.PredictorAdaptor.load_state_dict", "modulename": "micro_sam.bioimageio.predictor_adaptor", "qualname": "PredictorAdaptor.load_state_dict", "kind": "function", "doc": "

    Copies parameters and buffers from state_dict into\nthis module and its descendants. If strict is True, then\nthe keys of state_dict must exactly match the keys returned\nby this module's ~torch.nn.Module.state_dict() function.

    \n\n
    \n\n

    If assign is True the optimizer must be created after\nthe call to load_state_dict.

    \n\n
    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    NamedTuple with missing_keys and unexpected_keys fields:\n * missing_keys is a list of str containing the missing keys\n * unexpected_keys is a list of str containing the unexpected keys

    \n
    \n\n
    Note:
    \n\n
    \n

    If a parameter or buffer is registered as None and its corresponding key\n exists in state_dict, load_state_dict() will raise a\n RuntimeError.

    \n
    \n", "signature": "(self, state):", "funcdef": "def"}, {"fullname": "micro_sam.bioimageio.predictor_adaptor.PredictorAdaptor.forward", "modulename": "micro_sam.bioimageio.predictor_adaptor", "qualname": "PredictorAdaptor.forward", "kind": "function", "doc": "
    Arguments:
    \n\n\n\n

    Returns:

    \n", "signature": "(\tself,\timage: torch.Tensor,\tbox_prompts: Optional[torch.Tensor] = None,\tpoint_prompts: Optional[torch.Tensor] = None,\tpoint_labels: Optional[torch.Tensor] = None,\tmask_prompts: Optional[torch.Tensor] = None,\tembeddings: Optional[torch.Tensor] = None) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation", "modulename": "micro_sam.evaluation", "kind": "module", "doc": "

    Functionality for evaluating Segment Anything models on microscopy data.

    \n"}, {"fullname": "micro_sam.evaluation.evaluation", "modulename": "micro_sam.evaluation.evaluation", "kind": "module", "doc": "

    Evaluation functionality for segmentation predictions from micro_sam.evaluation.automatic_mask_generation\nand micro_sam.evaluation.inference.

    \n"}, {"fullname": "micro_sam.evaluation.evaluation.run_evaluation", "modulename": "micro_sam.evaluation.evaluation", "qualname": "run_evaluation", "kind": "function", "doc": "

    Run evaluation for instance segmentation predictions.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    A DataFrame that contains the evaluation results.

    \n
    \n", "signature": "(\tgt_paths: List[Union[str, os.PathLike]],\tprediction_paths: List[Union[str, os.PathLike]],\tsave_path: Union[str, os.PathLike, NoneType] = None,\tverbose: bool = True) -> pandas.core.frame.DataFrame:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.evaluation.run_evaluation_for_iterative_prompting", "modulename": "micro_sam.evaluation.evaluation", "qualname": "run_evaluation_for_iterative_prompting", "kind": "function", "doc": "

    Run evaluation for iterative prompt-based segmentation predictions.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    A DataFrame that contains the evaluation results.

    \n
    \n", "signature": "(\tgt_paths: List[Union[str, os.PathLike]],\tprediction_root: Union[os.PathLike, str],\texperiment_folder: Union[os.PathLike, str],\tstart_with_box_prompt: bool = False,\toverwrite_results: bool = False) -> pandas.core.frame.DataFrame:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.experiments", "modulename": "micro_sam.evaluation.experiments", "kind": "module", "doc": "

    Predefined experiment settings for experiments with different prompt strategies.

    \n"}, {"fullname": "micro_sam.evaluation.experiments.ExperimentSetting", "modulename": "micro_sam.evaluation.experiments", "qualname": "ExperimentSetting", "kind": "variable", "doc": "

    \n", "default_value": "typing.Dict"}, {"fullname": "micro_sam.evaluation.experiments.full_experiment_settings", "modulename": "micro_sam.evaluation.experiments", "qualname": "full_experiment_settings", "kind": "function", "doc": "

    The full experiment settings.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The list of experiment settings.

    \n
    \n", "signature": "(\tuse_boxes: bool = False,\tpositive_range: Optional[List[int]] = None,\tnegative_range: Optional[List[int]] = None) -> List[Dict]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.experiments.default_experiment_settings", "modulename": "micro_sam.evaluation.experiments", "qualname": "default_experiment_settings", "kind": "function", "doc": "

    The three default experiment settings.

    \n\n

    For the default experiments we use a single positive prompt,\ntwo positive and four negative prompts and box prompts.

    \n\n
    Returns:
    \n\n
    \n

    The list of experiment settings.

    \n
    \n", "signature": "() -> List[Dict]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.experiments.get_experiment_setting_name", "modulename": "micro_sam.evaluation.experiments", "qualname": "get_experiment_setting_name", "kind": "function", "doc": "

    Get the name for the given experiment setting.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The name for this experiment setting.

    \n
    \n", "signature": "(setting: Dict) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.inference", "modulename": "micro_sam.evaluation.inference", "kind": "module", "doc": "

    Inference with Segment Anything models and different prompt strategies.

    \n"}, {"fullname": "micro_sam.evaluation.inference.precompute_all_embeddings", "modulename": "micro_sam.evaluation.inference", "qualname": "precompute_all_embeddings", "kind": "function", "doc": "

    Precompute all image embeddings.

    \n\n

    To enable running different inference tasks in parallel afterwards.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\timage_paths: List[Union[str, os.PathLike]],\tembedding_dir: Union[str, os.PathLike]) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.inference.precompute_all_prompts", "modulename": "micro_sam.evaluation.inference", "qualname": "precompute_all_prompts", "kind": "function", "doc": "

    Precompute all point prompts.

    \n\n

    To enable running different inference tasks in parallel afterwards.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tgt_paths: List[Union[str, os.PathLike]],\tprompt_save_dir: Union[str, os.PathLike],\tprompt_settings: List[Dict[str, Any]]) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.inference.run_inference_with_prompts", "modulename": "micro_sam.evaluation.inference", "qualname": "run_inference_with_prompts", "kind": "function", "doc": "

    Run segment anything inference for multiple images using prompts derived from groundtruth.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\timage_paths: List[Union[str, os.PathLike]],\tgt_paths: List[Union[str, os.PathLike]],\tembedding_dir: Union[str, os.PathLike],\tprediction_dir: Union[str, os.PathLike],\tuse_points: bool,\tuse_boxes: bool,\tn_positives: int,\tn_negatives: int,\tdilation: int = 5,\tprompt_save_dir: Union[str, os.PathLike, NoneType] = None,\tbatch_size: int = 512) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.inference.run_inference_with_iterative_prompting", "modulename": "micro_sam.evaluation.inference", "qualname": "run_inference_with_iterative_prompting", "kind": "function", "doc": "

    Run segment anything inference for multiple images using prompts iteratively\n derived from model outputs and groundtruth

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\timage_paths: List[Union[str, os.PathLike]],\tgt_paths: List[Union[str, os.PathLike]],\tembedding_dir: Union[str, os.PathLike],\tprediction_dir: Union[str, os.PathLike],\tstart_with_box_prompt: bool,\tdilation: int = 5,\tbatch_size: int = 32,\tn_iterations: int = 8,\tuse_masks: bool = False) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.inference.run_amg", "modulename": "micro_sam.evaluation.inference", "qualname": "run_amg", "kind": "function", "doc": "

    \n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tval_image_paths: List[Union[str, os.PathLike]],\tval_gt_paths: List[Union[str, os.PathLike]],\ttest_image_paths: List[Union[str, os.PathLike]],\tiou_thresh_values: Optional[List[float]] = None,\tstability_score_values: Optional[List[float]] = None) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.inference.run_instance_segmentation_with_decoder", "modulename": "micro_sam.evaluation.inference", "qualname": "run_instance_segmentation_with_decoder", "kind": "function", "doc": "

    \n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tval_image_paths: List[Union[str, os.PathLike]],\tval_gt_paths: List[Union[str, os.PathLike]],\ttest_image_paths: List[Union[str, os.PathLike]]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation", "modulename": "micro_sam.evaluation.instance_segmentation", "kind": "module", "doc": "

    Inference and evaluation for the automatic instance segmentation functionality.

    \n"}, {"fullname": "micro_sam.evaluation.instance_segmentation.default_grid_search_values_amg", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "default_grid_search_values_amg", "kind": "function", "doc": "

    Default grid-search parameter for AMG-based instance segmentation.

    \n\n

    Return grid search values for the two most important parameters:

    \n\n\n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The values for grid search.

    \n
    \n", "signature": "(\tiou_thresh_values: Optional[List[float]] = None,\tstability_score_values: Optional[List[float]] = None) -> Dict[str, List[float]]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.default_grid_search_values_instance_segmentation_with_decoder", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "default_grid_search_values_instance_segmentation_with_decoder", "kind": "function", "doc": "

    Default grid-search parameter for decoder-based instance segmentation.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The values for grid search.

    \n
    \n", "signature": "(\tcenter_distance_threshold_values: Optional[List[float]] = None,\tboundary_distance_threshold_values: Optional[List[float]] = None,\tdistance_smoothing_values: Optional[List[float]] = None,\tmin_size_values: Optional[List[float]] = None) -> Dict[str, List[float]]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.run_instance_segmentation_grid_search", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "run_instance_segmentation_grid_search", "kind": "function", "doc": "

    Run grid search for automatic mask generation.

    \n\n

    The parameters and their respective value ranges for the grid search are specified via the\n'grid_search_values' argument. For example, to run a grid search over the parameters 'pred_iou_thresh'\nand 'stability_score_thresh', you can pass the following:

    \n\n
    grid_search_values = {\n    \"pred_iou_thresh\": [0.6, 0.7, 0.8, 0.9],\n    \"stability_score_thresh\": [0.6, 0.7, 0.8, 0.9],\n}\n
    \n\n

    All combinations of the parameters will be checked.

    \n\n

    You can use the functions default_grid_search_values_instance_segmentation_with_decoder\nor default_grid_search_values_amg to get the default grid search parameters for the two\nrespective instance segmentation methods.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tsegmenter: Union[micro_sam.instance_segmentation.AMGBase, micro_sam.instance_segmentation.InstanceSegmentationWithDecoder],\tgrid_search_values: Dict[str, List],\timage_paths: List[Union[str, os.PathLike]],\tgt_paths: List[Union[str, os.PathLike]],\tresult_dir: Union[str, os.PathLike],\tembedding_dir: Union[str, os.PathLike, NoneType],\tfixed_generate_kwargs: Optional[Dict[str, Any]] = None,\tverbose_gs: bool = False,\timage_key: Optional[str] = None,\tgt_key: Optional[str] = None,\trois: Optional[Tuple[slice, ...]] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.run_instance_segmentation_inference", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "run_instance_segmentation_inference", "kind": "function", "doc": "

    Run inference for automatic mask generation.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tsegmenter: Union[micro_sam.instance_segmentation.AMGBase, micro_sam.instance_segmentation.InstanceSegmentationWithDecoder],\timage_paths: List[Union[str, os.PathLike]],\tembedding_dir: Union[str, os.PathLike],\tprediction_dir: Union[str, os.PathLike],\tgenerate_kwargs: Optional[Dict[str, Any]] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.evaluate_instance_segmentation_grid_search", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "evaluate_instance_segmentation_grid_search", "kind": "function", "doc": "

    Evaluate gridsearch results.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The best parameter setting.\n The evaluation score for the best setting.

    \n
    \n", "signature": "(\tresult_dir: Union[str, os.PathLike],\tgrid_search_parameters: List[str],\tcriterion: str = 'mSA') -> Tuple[Dict[str, Any], float]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.save_grid_search_best_params", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "save_grid_search_best_params", "kind": "function", "doc": "

    \n", "signature": "(best_kwargs, best_msa, grid_search_result_dir=None):", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.instance_segmentation.run_instance_segmentation_grid_search_and_inference", "modulename": "micro_sam.evaluation.instance_segmentation", "qualname": "run_instance_segmentation_grid_search_and_inference", "kind": "function", "doc": "

    Run grid search and inference for automatic mask generation.

    \n\n

    Please refer to the documentation of run_instance_segmentation_grid_search\nfor details on how to specify the grid search parameters.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tsegmenter: Union[micro_sam.instance_segmentation.AMGBase, micro_sam.instance_segmentation.InstanceSegmentationWithDecoder],\tgrid_search_values: Dict[str, List],\tval_image_paths: List[Union[str, os.PathLike]],\tval_gt_paths: List[Union[str, os.PathLike]],\ttest_image_paths: List[Union[str, os.PathLike]],\tembedding_dir: Union[str, os.PathLike],\tprediction_dir: Union[str, os.PathLike],\tresult_dir: Union[str, os.PathLike],\tfixed_generate_kwargs: Optional[Dict[str, Any]] = None,\tverbose_gs: bool = True) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell", "modulename": "micro_sam.evaluation.livecell", "kind": "module", "doc": "

    Inference and evaluation for the LIVECell dataset and\nthe different cell lines contained in it.

    \n"}, {"fullname": "micro_sam.evaluation.livecell.CELL_TYPES", "modulename": "micro_sam.evaluation.livecell", "qualname": "CELL_TYPES", "kind": "variable", "doc": "

    \n", "default_value": "['A172', 'BT474', 'BV2', 'Huh7', 'MCF7', 'SHSY5Y', 'SkBr3', 'SKOV3']"}, {"fullname": "micro_sam.evaluation.livecell.livecell_inference", "modulename": "micro_sam.evaluation.livecell", "qualname": "livecell_inference", "kind": "function", "doc": "

    Run inference for livecell with a fixed prompt setting.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tinput_folder: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tuse_points: bool,\tuse_boxes: bool,\tn_positives: Optional[int] = None,\tn_negatives: Optional[int] = None,\tprompt_folder: Union[os.PathLike, str, NoneType] = None,\tpredictor: Optional[segment_anything.predictor.SamPredictor] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.run_livecell_precompute_embeddings", "modulename": "micro_sam.evaluation.livecell", "qualname": "run_livecell_precompute_embeddings", "kind": "function", "doc": "

    Run precomputation of val and test image embeddings for livecell.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tinput_folder: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tn_val_per_cell_type: int = 25) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.run_livecell_iterative_prompting", "modulename": "micro_sam.evaluation.livecell", "qualname": "run_livecell_iterative_prompting", "kind": "function", "doc": "

    Run inference on livecell with iterative prompting setting.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tinput_folder: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tstart_with_box: bool = False,\tuse_masks: bool = False) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.run_livecell_amg", "modulename": "micro_sam.evaluation.livecell", "qualname": "run_livecell_amg", "kind": "function", "doc": "

    Run automatic mask generation grid-search and inference for livecell.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The path where the predicted images are stored.

    \n
    \n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tinput_folder: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tiou_thresh_values: Optional[List[float]] = None,\tstability_score_values: Optional[List[float]] = None,\tverbose_gs: bool = False,\tn_val_per_cell_type: int = 25) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.run_livecell_instance_segmentation_with_decoder", "modulename": "micro_sam.evaluation.livecell", "qualname": "run_livecell_instance_segmentation_with_decoder", "kind": "function", "doc": "

    Run automatic mask generation grid-search and inference for livecell.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The path where the predicted images are stored.

    \n
    \n", "signature": "(\tcheckpoint: Union[str, os.PathLike],\tinput_folder: Union[str, os.PathLike],\tmodel_type: str,\texperiment_folder: Union[str, os.PathLike],\tcenter_distance_threshold_values: Optional[List[float]] = None,\tboundary_distance_threshold_values: Optional[List[float]] = None,\tdistance_smoothing_values: Optional[List[float]] = None,\tmin_size_values: Optional[List[float]] = None,\tverbose_gs: bool = False,\tn_val_per_cell_type: int = 25) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.run_livecell_inference", "modulename": "micro_sam.evaluation.livecell", "qualname": "run_livecell_inference", "kind": "function", "doc": "

    Run LIVECell inference with command line tool.

    \n", "signature": "() -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.livecell.run_livecell_evaluation", "modulename": "micro_sam.evaluation.livecell", "qualname": "run_livecell_evaluation", "kind": "function", "doc": "

    Run LiveCELL evaluation with command line tool.

    \n", "signature": "() -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.model_comparison", "modulename": "micro_sam.evaluation.model_comparison", "kind": "module", "doc": "

    Functionality for qualitative comparison of Segment Anything models on microscopy data.

    \n"}, {"fullname": "micro_sam.evaluation.model_comparison.generate_data_for_model_comparison", "modulename": "micro_sam.evaluation.model_comparison", "qualname": "generate_data_for_model_comparison", "kind": "function", "doc": "

    Generate samples for qualitative model comparison.

    \n\n

    This precomputes the input for model_comparison and model_comparison_with_napari.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tloader: torch.utils.data.dataloader.DataLoader,\toutput_folder: Union[str, os.PathLike],\tmodel_type1: str,\tmodel_type2: str,\tn_samples: int,\tmodel_type3: Optional[str] = None,\tcheckpoint1: Union[str, os.PathLike, NoneType] = None,\tcheckpoint2: Union[str, os.PathLike, NoneType] = None,\tcheckpoint3: Union[str, os.PathLike, NoneType] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.model_comparison.model_comparison", "modulename": "micro_sam.evaluation.model_comparison", "qualname": "model_comparison", "kind": "function", "doc": "

Create images for a qualitative model comparison.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\toutput_folder: Union[str, os.PathLike],\tn_images_per_sample: int,\tmin_size: int,\tplot_folder: Union[str, os.PathLike, NoneType] = None,\tpoint_radius: int = 4,\toutline_dilation: int = 0,\thave_model3=False) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.model_comparison.model_comparison_with_napari", "modulename": "micro_sam.evaluation.model_comparison", "qualname": "model_comparison_with_napari", "kind": "function", "doc": "

Use napari to display the qualitative comparison results for two models.

    \n\n
    Arguments:
    \n\n\n", "signature": "(output_folder: Union[str, os.PathLike], show_points: bool = True) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.multi_dimensional_segmentation", "modulename": "micro_sam.evaluation.multi_dimensional_segmentation", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.evaluation.multi_dimensional_segmentation.default_grid_search_values_multi_dimensional_segmentation", "modulename": "micro_sam.evaluation.multi_dimensional_segmentation", "qualname": "default_grid_search_values_multi_dimensional_segmentation", "kind": "function", "doc": "

    Default grid-search parameters for multi-dimensional prompt-based instance segmentation.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The values for grid search.

    \n
    \n", "signature": "(\tiou_threshold_values: Optional[List[float]] = None,\tprojection_method_values: Union[str, dict, NoneType] = None,\tbox_extension_values: Union[float, int, NoneType] = None) -> Dict[str, List]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.multi_dimensional_segmentation.segment_slices_from_ground_truth", "modulename": "micro_sam.evaluation.multi_dimensional_segmentation", "qualname": "segment_slices_from_ground_truth", "kind": "function", "doc": "

    Segment all objects in a volume by prompt-based segmentation in one slice per object.

    \n\n

    This function first segments each object in the respective specified slice using interactive\n(prompt-based) segmentation functionality. Then it segments the particular object in the\nremaining slices in the volume.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tvolume: numpy.ndarray,\tground_truth: numpy.ndarray,\tmodel_type: str,\tcheckpoint_path: Union[str, os.PathLike],\tembedding_path: Union[str, os.PathLike],\tiou_threshold: float = 0.8,\tprojection: Union[str, dict] = 'mask',\tbox_extension: Union[float, int] = 0.025,\tdevice: Union[str, torch.device] = None,\tinteractive_seg_mode: str = 'box',\tverbose: bool = False,\treturn_segmentation: bool = False,\tmin_size: int = 0) -> Union[float, Tuple[numpy.ndarray, float]]:", "funcdef": "def"}, {"fullname": "micro_sam.evaluation.multi_dimensional_segmentation.run_multi_dimensional_segmentation_grid_search", "modulename": "micro_sam.evaluation.multi_dimensional_segmentation", "qualname": "run_multi_dimensional_segmentation_grid_search", "kind": "function", "doc": "

    Run grid search for prompt-based multi-dimensional instance segmentation.

    \n\n

    The parameters and their respective value ranges for the grid search are specified via the\ngrid_search_values argument. For example, to run a grid search over the parameters iou_threshold,\nprojection and box_extension, you can pass the following:

    \n\n
grid_search_values = {\n    \"iou_threshold\": [0.5, 0.6, 0.7, 0.8, 0.9],\n    \"projection\": [\"mask\", \"bounding_box\", \"points\"],\n    \"box_extension\": [0, 0.1, 0.2, 0.3, 0.4, 0.5],\n}\n
    \n\n

    All combinations of the parameters will be checked.\nIf passed None, the function default_grid_search_values_multi_dimensional_segmentation is used\nto get the default grid search parameters for the instance segmentation method.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tvolume: numpy.ndarray,\tground_truth: numpy.ndarray,\tmodel_type: str,\tcheckpoint_path: Union[str, os.PathLike],\tembedding_path: Union[str, os.PathLike],\tresult_dir: Union[str, os.PathLike],\tinteractive_seg_mode: str = 'box',\tverbose: bool = False,\tgrid_search_values: Optional[Dict[str, List]] = None,\tmin_size: int = 0):", "funcdef": "def"}, {"fullname": "micro_sam.inference", "modulename": "micro_sam.inference", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.inference.batched_inference", "modulename": "micro_sam.inference", "qualname": "batched_inference", "kind": "function", "doc": "

    Run batched inference for input prompts.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The predicted segmentation masks.

    \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\timage: numpy.ndarray,\tbatch_size: int,\tboxes: Optional[numpy.ndarray] = None,\tpoints: Optional[numpy.ndarray] = None,\tpoint_labels: Optional[numpy.ndarray] = None,\tmultimasking: bool = False,\tembedding_path: Union[str, os.PathLike, NoneType] = None,\treturn_instance_segmentation: bool = True,\tsegmentation_ids: Optional[list] = None,\treduce_multimasking: bool = True,\tlogits_masks: Optional[torch.Tensor] = None):", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation", "modulename": "micro_sam.instance_segmentation", "kind": "module", "doc": "

    Automated instance segmentation functionality.\nThe classes implemented here extend the automatic instance segmentation from Segment Anything:\nhttps://computational-cell-analytics.github.io/micro-sam/micro_sam.html

    \n"}, {"fullname": "micro_sam.instance_segmentation.mask_data_to_segmentation", "modulename": "micro_sam.instance_segmentation", "qualname": "mask_data_to_segmentation", "kind": "function", "doc": "

    Convert the output of the automatic mask generation to an instance segmentation.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The instance segmentation.

    \n
    \n", "signature": "(\tmasks: List[Dict[str, Any]],\twith_background: bool,\tmin_object_size: int = 0,\tmax_object_size: Optional[int] = None) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.AMGBase", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase", "kind": "class", "doc": "

    Base class for the automatic mask generators.

    \n", "bases": "abc.ABC"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.is_initialized", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.is_initialized", "kind": "variable", "doc": "

    Whether the mask generator has already been initialized.

    \n"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.crop_list", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.crop_list", "kind": "variable", "doc": "

    The list of mask data after initialization.

    \n"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.crop_boxes", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.crop_boxes", "kind": "variable", "doc": "

    The list of crop boxes.

    \n"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.original_size", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.original_size", "kind": "variable", "doc": "

    The original image size.

    \n"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.get_state", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.get_state", "kind": "function", "doc": "

    Get the initialized state of the mask generator.

    \n\n
    Returns:
    \n\n
    \n

    State of the mask generator.

    \n
    \n", "signature": "(self) -> Dict[str, Any]:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.set_state", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.set_state", "kind": "function", "doc": "

    Set the state of the mask generator.

    \n\n
    Arguments:
    \n\n\n", "signature": "(self, state: Dict[str, Any]) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.AMGBase.clear_state", "modulename": "micro_sam.instance_segmentation", "qualname": "AMGBase.clear_state", "kind": "function", "doc": "

    Clear the state of the mask generator.

    \n", "signature": "(self):", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.AutomaticMaskGenerator", "modulename": "micro_sam.instance_segmentation", "qualname": "AutomaticMaskGenerator", "kind": "class", "doc": "

    Generates an instance segmentation without prompts, using a point grid.

    \n\n

    This class implements the same logic as\nhttps://github.com/facebookresearch/segment-anything/blob/main/segment_anything/automatic_mask_generator.py\nIt decouples the computationally expensive steps of generating masks from the cheap post-processing operation\nto filter these masks to enable grid search and interactively changing the post-processing.

    \n\n

    Use this class as follows:

    \n\n
    \n
    amg = AutomaticMaskGenerator(predictor)\namg.initialize(image)  # Initialize the masks, this takes care of all expensive computations.\nmasks = amg.generate(pred_iou_thresh=0.8)  # Generate the masks. This is fast and enables testing parameters\n
    \n
    \n\n
    Arguments:
    \n\n\n", "bases": "AMGBase"}, {"fullname": "micro_sam.instance_segmentation.AutomaticMaskGenerator.__init__", "modulename": "micro_sam.instance_segmentation", "qualname": "AutomaticMaskGenerator.__init__", "kind": "function", "doc": "

    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tpoints_per_side: Optional[int] = 32,\tpoints_per_batch: Optional[int] = None,\tcrop_n_layers: int = 0,\tcrop_overlap_ratio: float = 0.3413333333333333,\tcrop_n_points_downscale_factor: int = 1,\tpoint_grids: Optional[List[numpy.ndarray]] = None,\tstability_score_offset: float = 1.0)"}, {"fullname": "micro_sam.instance_segmentation.AutomaticMaskGenerator.initialize", "modulename": "micro_sam.instance_segmentation", "qualname": "AutomaticMaskGenerator.initialize", "kind": "function", "doc": "

    Initialize image embeddings and masks for an image.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tself,\timage: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tverbose: bool = False,\tpbar_init: Optional[<built-in function callable>] = None,\tpbar_update: Optional[<built-in function callable>] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.AutomaticMaskGenerator.generate", "modulename": "micro_sam.instance_segmentation", "qualname": "AutomaticMaskGenerator.generate", "kind": "function", "doc": "

    Generate instance segmentation for the currently initialized image.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The instance segmentation masks.

    \n
    \n", "signature": "(\tself,\tpred_iou_thresh: float = 0.88,\tstability_score_thresh: float = 0.95,\tbox_nms_thresh: float = 0.7,\tcrop_nms_thresh: float = 0.7,\tmin_mask_region_area: int = 0,\toutput_mode: str = 'binary_mask') -> List[Dict[str, Any]]:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.TiledAutomaticMaskGenerator", "modulename": "micro_sam.instance_segmentation", "qualname": "TiledAutomaticMaskGenerator", "kind": "class", "doc": "

    Generates an instance segmentation without prompts, using a point grid.

    \n\n

    Implements the same functionality as AutomaticMaskGenerator but for tiled embeddings.

    \n\n
    Arguments:
    \n\n\n", "bases": "AutomaticMaskGenerator"}, {"fullname": "micro_sam.instance_segmentation.TiledAutomaticMaskGenerator.__init__", "modulename": "micro_sam.instance_segmentation", "qualname": "TiledAutomaticMaskGenerator.__init__", "kind": "function", "doc": "

    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tpoints_per_side: Optional[int] = 32,\tpoints_per_batch: int = 64,\tpoint_grids: Optional[List[numpy.ndarray]] = None,\tstability_score_offset: float = 1.0)"}, {"fullname": "micro_sam.instance_segmentation.TiledAutomaticMaskGenerator.initialize", "modulename": "micro_sam.instance_segmentation", "qualname": "TiledAutomaticMaskGenerator.initialize", "kind": "function", "doc": "

    Initialize image embeddings and masks for an image.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tself,\timage: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\tverbose: bool = False,\tpbar_init: Optional[<built-in function callable>] = None,\tpbar_update: Optional[<built-in function callable>] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter", "kind": "class", "doc": "

    Adapter to contain the UNETR decoder in a single module.

    It is used to apply the decoder on top of pre-computed image embeddings for the segmentation functionality.
    See also: https://github.com/constantinpape/torch-em/blob/main/torch_em/model/unetr.py
    \n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.__init__", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.__init__", "kind": "function", "doc": "

    Initializes internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(unetr)"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.base", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.base", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.out_conv", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.out_conv", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.deconv_out", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.deconv_out", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.decoder_head", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.decoder_head", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.final_activation", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.final_activation", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.postprocess_masks", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.postprocess_masks", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.decoder", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.decoder", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.deconv1", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.deconv1", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.deconv2", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.deconv2", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.deconv3", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.deconv3", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.deconv4", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.deconv4", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.instance_segmentation.DecoderAdapter.forward", "modulename": "micro_sam.instance_segmentation", "qualname": "DecoderAdapter.forward", "kind": "function", "doc": "

    Defines the computation performed at every call.

    \n\n

    Should be overridden by all subclasses.

    \n\n
    \n\n

    Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.

    \n\n
    \n", "signature": "(self, input_, input_shape, original_shape):", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.get_unetr", "modulename": "micro_sam.instance_segmentation", "qualname": "get_unetr", "kind": "function", "doc": "

    Get UNETR model for automatic instance segmentation.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The UNETR model.

    \n
    \n", "signature": "(\timage_encoder: torch.nn.modules.module.Module,\tdecoder_state: Optional[collections.OrderedDict[str, torch.Tensor]] = None,\tdevice: Union[str, torch.device, NoneType] = None) -> torch.nn.modules.module.Module:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.get_decoder", "modulename": "micro_sam.instance_segmentation", "qualname": "get_decoder", "kind": "function", "doc": "

    Get the decoder to predict outputs for automatic instance segmentation.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The decoder for instance segmentation.

    \n
    \n", "signature": "(\timage_encoder: torch.nn.modules.module.Module,\tdecoder_state: collections.OrderedDict[str, torch.Tensor],\tdevice: Union[str, torch.device, NoneType] = None) -> micro_sam.instance_segmentation.DecoderAdapter:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.get_predictor_and_decoder", "modulename": "micro_sam.instance_segmentation", "qualname": "get_predictor_and_decoder", "kind": "function", "doc": "

    Load the SAM model (predictor) and instance segmentation decoder.

    \n\n

    This requires a checkpoint that contains the state for both predictor and decoder.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The SAM predictor.
    The decoder for instance segmentation.

    \n
    \n", "signature": "(\tmodel_type: str,\tcheckpoint_path: Union[str, os.PathLike],\tdevice: Union[str, torch.device, NoneType] = None) -> Tuple[segment_anything.predictor.SamPredictor, micro_sam.instance_segmentation.DecoderAdapter]:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder", "kind": "class", "doc": "

    Generates an instance segmentation without prompts, using a decoder.

    Implements the same interface as AutomaticMaskGenerator.

    Use this class as follows:

    segmenter = InstanceSegmentationWithDecoder(predictor, decoder)
    segmenter.initialize(image)   # Predict the image embeddings and decoder outputs.
    masks = segmenter.generate(center_distance_threshold=0.75)  # Generate the instance segmentation.

    Arguments:
    \n\n\n"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.__init__", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.__init__", "kind": "function", "doc": "

    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tdecoder: torch.nn.modules.module.Module)"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.is_initialized", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.is_initialized", "kind": "variable", "doc": "

    Whether the mask generator has already been initialized.

    \n"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.initialize", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.initialize", "kind": "function", "doc": "

    Initialize image embeddings and decoder predictions for an image.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tself,\timage: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tverbose: bool = False,\tpbar_init: Optional[<built-in function callable>] = None,\tpbar_update: Optional[<built-in function callable>] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.generate", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.generate", "kind": "function", "doc": "

    Generate instance segmentation for the currently initialized image.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The instance segmentation masks.

    \n
    \n", "signature": "(\tself,\tcenter_distance_threshold: float = 0.5,\tboundary_distance_threshold: float = 0.5,\tforeground_threshold: float = 0.5,\tforeground_smoothing: float = 1.0,\tdistance_smoothing: float = 1.6,\tmin_size: int = 0,\toutput_mode: Optional[str] = 'binary_mask') -> List[Dict[str, Any]]:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.get_state", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.get_state", "kind": "function", "doc": "

    Get the initialized state of the instance segmenter.

    \n\n
    Returns:
    \n\n
    \n

    Instance segmentation state.

    \n
    \n", "signature": "(self) -> Dict[str, Any]:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.set_state", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.set_state", "kind": "function", "doc": "

    Set the state of the instance segmenter.

    \n\n
    Arguments:
    \n\n\n", "signature": "(self, state: Dict[str, Any]) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.InstanceSegmentationWithDecoder.clear_state", "modulename": "micro_sam.instance_segmentation", "qualname": "InstanceSegmentationWithDecoder.clear_state", "kind": "function", "doc": "

    Clear the state of the instance segmenter.

    \n", "signature": "(self):", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.TiledInstanceSegmentationWithDecoder", "modulename": "micro_sam.instance_segmentation", "qualname": "TiledInstanceSegmentationWithDecoder", "kind": "class", "doc": "

    Same as InstanceSegmentationWithDecoder but for tiled image embeddings.

    \n", "bases": "InstanceSegmentationWithDecoder"}, {"fullname": "micro_sam.instance_segmentation.TiledInstanceSegmentationWithDecoder.initialize", "modulename": "micro_sam.instance_segmentation", "qualname": "TiledInstanceSegmentationWithDecoder.initialize", "kind": "function", "doc": "

    Initialize image embeddings and decoder predictions for an image.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tself,\timage: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\tverbose: bool = False,\tpbar_init: Optional[<built-in function callable>] = None,\tpbar_update: Optional[<built-in function callable>] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.instance_segmentation.get_amg", "modulename": "micro_sam.instance_segmentation", "qualname": "get_amg", "kind": "function", "doc": "

    Get the automatic mask generator class.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The automatic mask generator.

    \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tis_tiled: bool,\tdecoder: Optional[torch.nn.modules.module.Module] = None,\t**kwargs) -> Union[micro_sam.instance_segmentation.AMGBase, micro_sam.instance_segmentation.InstanceSegmentationWithDecoder]:", "funcdef": "def"}, {"fullname": "micro_sam.multi_dimensional_segmentation", "modulename": "micro_sam.multi_dimensional_segmentation", "kind": "module", "doc": "

    Multi-dimensional segmentation with Segment Anything.

    \n"}, {"fullname": "micro_sam.multi_dimensional_segmentation.PROJECTION_MODES", "modulename": "micro_sam.multi_dimensional_segmentation", "qualname": "PROJECTION_MODES", "kind": "variable", "doc": "

    \n", "default_value": "('box', 'mask', 'points', 'points_and_mask', 'single_point')"}, {"fullname": "micro_sam.multi_dimensional_segmentation.segment_mask_in_volume", "modulename": "micro_sam.multi_dimensional_segmentation", "qualname": "segment_mask_in_volume", "kind": "function", "doc": "

    Segment an object mask in volumetric data.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    Array with the volumetric segmentation.
    Tuple with the first and last segmented slice.

    \n
    \n", "signature": "(\tsegmentation: numpy.ndarray,\tpredictor: segment_anything.predictor.SamPredictor,\timage_embeddings: Dict[str, Any],\tsegmented_slices: numpy.ndarray,\tstop_lower: bool,\tstop_upper: bool,\tiou_threshold: float,\tprojection: Union[str, dict],\tupdate_progress: Optional[<built-in function callable>] = None,\tbox_extension: float = 0.0,\tverbose: bool = False) -> Tuple[numpy.ndarray, Tuple[int, int]]:", "funcdef": "def"}, {"fullname": "micro_sam.multi_dimensional_segmentation.merge_instance_segmentation_3d", "modulename": "micro_sam.multi_dimensional_segmentation", "qualname": "merge_instance_segmentation_3d", "kind": "function", "doc": "

    Merge stacked 2d instance segmentations into a consistent 3d segmentation.

    \n\n

    Solves a multicut problem based on the overlap of objects to merge across z.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The merged segmentation.

    \n
    \n", "signature": "(\tslice_segmentation: numpy.ndarray,\tbeta: float = 0.5,\twith_background: bool = True,\tgap_closing: Optional[int] = None,\tmin_z_extent: Optional[int] = None,\tverbose: bool = True,\tpbar_init: Optional[<built-in function callable>] = None,\tpbar_update: Optional[<built-in function callable>] = None) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.multi_dimensional_segmentation.automatic_3d_segmentation", "modulename": "micro_sam.multi_dimensional_segmentation", "qualname": "automatic_3d_segmentation", "kind": "function", "doc": "

    Segment volume in 3d.

    \n\n

    First segments the slices individually in 2d and then merges them across 3d, based on the overlap of objects between slices.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The segmentation.

    \n
    \n", "signature": "(\tvolume: numpy.ndarray,\tpredictor: segment_anything.predictor.SamPredictor,\tsegmentor: micro_sam.instance_segmentation.AMGBase,\tembedding_path: Union[os.PathLike, str, NoneType] = None,\twith_background: bool = True,\tgap_closing: Optional[int] = None,\tmin_z_extent: Optional[int] = None,\tverbose: bool = True,\t**kwargs) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.precompute_state", "modulename": "micro_sam.precompute_state", "kind": "module", "doc": "

    Precompute image embeddings and automatic mask generator state for image data.

    \n"}, {"fullname": "micro_sam.precompute_state.cache_amg_state", "modulename": "micro_sam.precompute_state", "qualname": "cache_amg_state", "kind": "function", "doc": "

    Compute and cache or load the state for the automatic mask generator.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The automatic mask generator class with the cached state.

    \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\traw: numpy.ndarray,\timage_embeddings: Dict[str, Any],\tsave_path: Union[str, os.PathLike],\tverbose: bool = True,\ti: Optional[int] = None,\t**kwargs) -> micro_sam.instance_segmentation.AMGBase:", "funcdef": "def"}, {"fullname": "micro_sam.precompute_state.cache_is_state", "modulename": "micro_sam.precompute_state", "qualname": "cache_is_state", "kind": "function", "doc": "

    Compute and cache or load the state for the decoder-based instance segmentation.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The instance segmentation class with the cached state.

    \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tdecoder: torch.nn.modules.module.Module,\traw: numpy.ndarray,\timage_embeddings: Dict[str, Any],\tsave_path: Union[str, os.PathLike],\tverbose: bool = True,\ti: Optional[int] = None,\t**kwargs) -> micro_sam.instance_segmentation.AMGBase:", "funcdef": "def"}, {"fullname": "micro_sam.precompute_state.precompute_state", "modulename": "micro_sam.precompute_state", "qualname": "precompute_state", "kind": "function", "doc": "

    Precompute the image embeddings and other optional state for the input image(s).

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tinput_path: Union[os.PathLike, str],\toutput_path: Union[os.PathLike, str],\tmodel_type: str = 'vit_l',\tcheckpoint_path: Union[os.PathLike, str, NoneType] = None,\tkey: Optional[str] = None,\tndim: Optional[int] = None,\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\tprecompute_amg_state: bool = False) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.prompt_based_segmentation", "modulename": "micro_sam.prompt_based_segmentation", "kind": "module", "doc": "

    Functions for prompt-based segmentation with Segment Anything.

    \n"}, {"fullname": "micro_sam.prompt_based_segmentation.segment_from_points", "modulename": "micro_sam.prompt_based_segmentation", "qualname": "segment_from_points", "kind": "function", "doc": "

    Segmentation from point prompts.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The binary segmentation mask.

    \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tpoints: numpy.ndarray,\tlabels: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tmultimask_output: bool = False,\treturn_all: bool = False,\tuse_best_multimask: Optional[bool] = None):", "funcdef": "def"}, {"fullname": "micro_sam.prompt_based_segmentation.segment_from_mask", "modulename": "micro_sam.prompt_based_segmentation", "qualname": "segment_from_mask", "kind": "function", "doc": "

    Segmentation from a mask prompt.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The binary segmentation mask.

    \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tmask: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tuse_box: bool = True,\tuse_mask: bool = True,\tuse_points: bool = False,\toriginal_size: Optional[Tuple[int, ...]] = None,\tmultimask_output: bool = False,\treturn_all: bool = False,\treturn_logits: bool = False,\tbox_extension: float = 0.0,\tbox: Optional[numpy.ndarray] = None,\tpoints: Optional[numpy.ndarray] = None,\tlabels: Optional[numpy.ndarray] = None,\tuse_single_point: bool = False):", "funcdef": "def"}, {"fullname": "micro_sam.prompt_based_segmentation.segment_from_box", "modulename": "micro_sam.prompt_based_segmentation", "qualname": "segment_from_box", "kind": "function", "doc": "

    Segmentation from a box prompt.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The binary segmentation mask.

    \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tbox: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tmultimask_output: bool = False,\treturn_all: bool = False,\tbox_extension: float = 0.0):", "funcdef": "def"}, {"fullname": "micro_sam.prompt_based_segmentation.segment_from_box_and_points", "modulename": "micro_sam.prompt_based_segmentation", "qualname": "segment_from_box_and_points", "kind": "function", "doc": "

    Segmentation from a box prompt and point prompts.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The binary segmentation mask.

    \n
    \n", "signature": "(\tpredictor: segment_anything.predictor.SamPredictor,\tbox: numpy.ndarray,\tpoints: numpy.ndarray,\tlabels: numpy.ndarray,\timage_embeddings: Optional[Dict[str, Any]] = None,\ti: Optional[int] = None,\tmultimask_output: bool = False,\treturn_all: bool = False):", "funcdef": "def"}, {"fullname": "micro_sam.prompt_generators", "modulename": "micro_sam.prompt_generators", "kind": "module", "doc": "

    Classes for generating prompts from ground-truth segmentation masks, for training or evaluation of prompt-based segmentation.

    \n"}, {"fullname": "micro_sam.prompt_generators.PromptGeneratorBase", "modulename": "micro_sam.prompt_generators", "qualname": "PromptGeneratorBase", "kind": "class", "doc": "

    PromptGeneratorBase is an interface to implement specific prompt generators.

    \n"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator", "kind": "class", "doc": "

    Generate point and/or box prompts from an instance segmentation.

    You can use this class to derive prompts from an instance segmentation, either for
    evaluation purposes or for training Segment Anything on custom data.
    In order to use this generator you need to precompute the bounding boxes and center
    coordinates of the instance segmentation, e.g. with util.get_centers_and_bounding_boxes.

    Here's an example for how to use this class:

    # Initialize the generator for 1 positive and 4 negative point prompts.
    prompt_generator = PointAndBoxPromptGenerator(1, 4, dilation_strength=8)

    # Precompute the bounding boxes for the given segmentation.
    bounding_boxes, _ = util.get_centers_and_bounding_boxes(segmentation)

    # Generate point prompts for the objects with ids 1, 2 and 3.
    seg_ids = (1, 2, 3)
    object_mask = np.stack([segmentation == seg_id for seg_id in seg_ids])[:, None]
    this_bounding_boxes = [bounding_boxes[seg_id] for seg_id in seg_ids]
    point_coords, point_labels, _, _ = prompt_generator(object_mask, this_bounding_boxes)

    Arguments:
    \n\n\n", "bases": "PromptGeneratorBase"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator.__init__", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator.__init__", "kind": "function", "doc": "

    \n", "signature": "(\tn_positive_points: int,\tn_negative_points: int,\tdilation_strength: int,\tget_point_prompts: bool = True,\tget_box_prompts: bool = False)"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator.n_positive_points", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator.n_positive_points", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator.n_negative_points", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator.n_negative_points", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator.dilation_strength", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator.dilation_strength", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator.get_box_prompts", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator.get_box_prompts", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.prompt_generators.PointAndBoxPromptGenerator.get_point_prompts", "modulename": "micro_sam.prompt_generators", "qualname": "PointAndBoxPromptGenerator.get_point_prompts", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.prompt_generators.IterativePromptGenerator", "modulename": "micro_sam.prompt_generators", "qualname": "IterativePromptGenerator", "kind": "class", "doc": "

    Generate point prompts from an instance segmentation iteratively.

    \n", "bases": "PromptGeneratorBase"}, {"fullname": "micro_sam.sam_annotator", "modulename": "micro_sam.sam_annotator", "kind": "module", "doc": "

    The interactive annotation tools.

    \n"}, {"fullname": "micro_sam.sam_annotator.annotator_2d", "modulename": "micro_sam.sam_annotator.annotator_2d", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.annotator_2d.Annotator2d", "modulename": "micro_sam.sam_annotator.annotator_2d", "qualname": "Annotator2d", "kind": "class", "doc": "

    Base class for micro_sam annotation plugins.

    \n\n

    Implements the logic for the 2d, 3d and tracking annotator. The annotators differ in their data dimensionality and in the widgets they use.

    \n", "bases": "micro_sam.sam_annotator._annotator._AnnotatorBase"}, {"fullname": "micro_sam.sam_annotator.annotator_2d.Annotator2d.__init__", "modulename": "micro_sam.sam_annotator.annotator_2d", "qualname": "Annotator2d.__init__", "kind": "function", "doc": "

    Create the annotator GUI.

    \n\n
    Arguments:
    \n\n\n", "signature": "(viewer: napari.viewer.Viewer)"}, {"fullname": "micro_sam.sam_annotator.annotator_2d.annotator_2d", "modulename": "micro_sam.sam_annotator.annotator_2d", "qualname": "annotator_2d", "kind": "function", "doc": "

    Start the 2d annotation tool for a given image.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The napari viewer, only returned if return_viewer=True.

    \n
    \n", "signature": "(\timage: numpy.ndarray,\tembedding_path: Optional[str] = None,\tsegmentation_result: Optional[numpy.ndarray] = None,\tmodel_type: str = 'vit_l',\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\treturn_viewer: bool = False,\tviewer: Optional[napari.viewer.Viewer] = None,\tprecompute_amg_state: bool = False,\tcheckpoint_path: Optional[str] = None,\tdevice: Union[str, torch.device, NoneType] = None,\tprefer_decoder: bool = True) -> Optional[napari.viewer.Viewer]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.annotator_3d", "modulename": "micro_sam.sam_annotator.annotator_3d", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.annotator_3d.Annotator3d", "modulename": "micro_sam.sam_annotator.annotator_3d", "qualname": "Annotator3d", "kind": "class", "doc": "

    Base class for micro_sam annotation plugins.

    \n\n

    Implements the logic for the 2d, 3d and tracking annotator. The annotators differ in their data dimensionality and in the widgets they use.

    \n", "bases": "micro_sam.sam_annotator._annotator._AnnotatorBase"}, {"fullname": "micro_sam.sam_annotator.annotator_3d.Annotator3d.__init__", "modulename": "micro_sam.sam_annotator.annotator_3d", "qualname": "Annotator3d.__init__", "kind": "function", "doc": "

    Create the annotator GUI.

    \n\n
    Arguments:
    \n\n\n", "signature": "(viewer: napari.viewer.Viewer)"}, {"fullname": "micro_sam.sam_annotator.annotator_3d.annotator_3d", "modulename": "micro_sam.sam_annotator.annotator_3d", "qualname": "annotator_3d", "kind": "function", "doc": "

    Start the 3d annotation tool for a given image volume.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The napari viewer, only returned if return_viewer=True.

    \n
    \n", "signature": "(\timage: numpy.ndarray,\tembedding_path: Optional[str] = None,\tsegmentation_result: Optional[numpy.ndarray] = None,\tmodel_type: str = 'vit_l',\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\treturn_viewer: bool = False,\tviewer: Optional[napari.viewer.Viewer] = None,\tprecompute_amg_state: bool = False,\tcheckpoint_path: Optional[str] = None,\tdevice: Union[str, torch.device, NoneType] = None,\tprefer_decoder: bool = True) -> Optional[napari.viewer.Viewer]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.annotator_tracking", "modulename": "micro_sam.sam_annotator.annotator_tracking", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.annotator_tracking.AnnotatorTracking", "modulename": "micro_sam.sam_annotator.annotator_tracking", "qualname": "AnnotatorTracking", "kind": "class", "doc": "

    Base class for micro_sam annotation plugins.

    \n\n

    Implements the logic for the 2d, 3d and tracking annotator. The annotators differ in their data dimensionality and in the widgets they use.

    \n", "bases": "micro_sam.sam_annotator._annotator._AnnotatorBase"}, {"fullname": "micro_sam.sam_annotator.annotator_tracking.AnnotatorTracking.__init__", "modulename": "micro_sam.sam_annotator.annotator_tracking", "qualname": "AnnotatorTracking.__init__", "kind": "function", "doc": "

    Create the annotator GUI.

    \n\n
    Arguments:
    \n\n\n", "signature": "(viewer: napari.viewer.Viewer)"}, {"fullname": "micro_sam.sam_annotator.annotator_tracking.annotator_tracking", "modulename": "micro_sam.sam_annotator.annotator_tracking", "qualname": "annotator_tracking", "kind": "function", "doc": "

    Start the tracking annotation tool for a given time series.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The napari viewer, only returned if return_viewer=True.

    \n
    \n", "signature": "(\timage: numpy.ndarray,\tembedding_path: Optional[str] = None,\tmodel_type: str = 'vit_l',\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\treturn_viewer: bool = False,\tviewer: Optional[napari.viewer.Viewer] = None,\tcheckpoint_path: Optional[str] = None,\tdevice: Union[str, torch.device, NoneType] = None) -> Optional[napari.viewer.Viewer]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator", "modulename": "micro_sam.sam_annotator.image_series_annotator", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator.image_series_annotator", "modulename": "micro_sam.sam_annotator.image_series_annotator", "qualname": "image_series_annotator", "kind": "function", "doc": "

    Run the annotation tool for a series of images (supported for both 2d and 3d images).

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The napari viewer, only returned if return_viewer=True.

    \n
    \n", "signature": "(\timages: Union[List[Union[os.PathLike, str]], List[numpy.ndarray]],\toutput_folder: str,\tmodel_type: str = 'vit_l',\tembedding_path: Optional[str] = None,\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\tviewer: Optional[napari.viewer.Viewer] = None,\treturn_viewer: bool = False,\tprecompute_amg_state: bool = False,\tcheckpoint_path: Optional[str] = None,\tis_volumetric: bool = False,\tdevice: Union[str, torch.device, NoneType] = None,\tprefer_decoder: bool = True) -> Optional[napari.viewer.Viewer]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator.image_folder_annotator", "modulename": "micro_sam.sam_annotator.image_series_annotator", "qualname": "image_folder_annotator", "kind": "function", "doc": "

    Run the 2d annotation tool for a series of images in a folder.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The napari viewer, only returned if return_viewer=True.

    \n
    \n", "signature": "(\tinput_folder: str,\toutput_folder: str,\tpattern: str = '*',\tviewer: Optional[napari.viewer.Viewer] = None,\treturn_viewer: bool = False,\t**kwargs) -> Optional[napari.viewer.Viewer]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator.ImageSeriesAnnotator", "modulename": "micro_sam.sam_annotator.image_series_annotator", "qualname": "ImageSeriesAnnotator", "kind": "class", "doc": "

    Widget to run the image series annotation tool.

    \n", "bases": "micro_sam.sam_annotator._widgets._WidgetBase"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator.ImageSeriesAnnotator.__init__", "modulename": "micro_sam.sam_annotator.image_series_annotator", "qualname": "ImageSeriesAnnotator.__init__", "kind": "function", "doc": "

    \n", "signature": "(viewer: napari.viewer.Viewer, parent=None)"}, {"fullname": "micro_sam.sam_annotator.image_series_annotator.ImageSeriesAnnotator.run_button", "modulename": "micro_sam.sam_annotator.image_series_annotator", "qualname": "ImageSeriesAnnotator.run_button", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.training_ui", "modulename": "micro_sam.sam_annotator.training_ui", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.training_ui.TrainingWidget", "modulename": "micro_sam.sam_annotator.training_ui", "qualname": "TrainingWidget", "kind": "class", "doc": "

    Widget to run the training of a Segment Anything model.

    \n", "bases": "micro_sam.sam_annotator._widgets._WidgetBase"}, {"fullname": "micro_sam.sam_annotator.training_ui.TrainingWidget.__init__", "modulename": "micro_sam.sam_annotator.training_ui", "qualname": "TrainingWidget.__init__", "kind": "function", "doc": "

    \n", "signature": "(parent=None)"}, {"fullname": "micro_sam.sam_annotator.training_ui.TrainingWidget.run_button", "modulename": "micro_sam.sam_annotator.training_ui", "qualname": "TrainingWidget.run_button", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.util", "modulename": "micro_sam.sam_annotator.util", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.sam_annotator.util.point_layer_to_prompts", "modulename": "micro_sam.sam_annotator.util", "qualname": "point_layer_to_prompts", "kind": "function", "doc": "

    Extract point prompts for SAM from a napari point layer.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The point coordinates for the prompts.
    The labels (positive or negative / 1 or 0) for the prompts.

    \n
    \n", "signature": "(\tlayer: napari.layers.points.points.Points,\ti=None,\ttrack_id=None,\twith_stop_annotation=True) -> Optional[Tuple[numpy.ndarray, numpy.ndarray]]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.util.shape_layer_to_prompts", "modulename": "micro_sam.sam_annotator.util", "qualname": "shape_layer_to_prompts", "kind": "function", "doc": "

    Extract prompts for SAM from a napari shape layer.

    \n\n

    Extracts the bounding box for 'rectangle' shapes and the bounding box and corresponding mask for 'ellipse' and 'polygon' shapes.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The box prompts.
    The mask prompts.

    \n
    \n", "signature": "(\tlayer: napari.layers.shapes.shapes.Shapes,\tshape: Tuple[int, int],\ti=None,\ttrack_id=None) -> Tuple[List[numpy.ndarray], List[Optional[numpy.ndarray]]]:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.util.prompt_layer_to_state", "modulename": "micro_sam.sam_annotator.util", "qualname": "prompt_layer_to_state", "kind": "function", "doc": "

    Get the state of the track from a point layer for a given timeframe.

    \n\n

    Only relevant for annotator_tracking.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The state of this frame (either "division" or "track").

    \n
    \n", "signature": "(prompt_layer: napari.layers.points.points.Points, i: int) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sam_annotator.util.prompt_layers_to_state", "modulename": "micro_sam.sam_annotator.util", "qualname": "prompt_layers_to_state", "kind": "function", "doc": "

    Get the state of the track from a point layer and shape layer for a given timeframe.

    \n\n

    Only relevant for annotator_tracking.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The state of this frame (either "division" or "track").

    \n
    \n", "signature": "(\tpoint_layer: napari.layers.points.points.Points,\tbox_layer: napari.layers.shapes.shapes.Shapes,\ti: int) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data", "modulename": "micro_sam.sample_data", "kind": "module", "doc": "

    Sample microscopy data.

    \n\n

    You can change the download location for sample data and model weights by setting the environment variable MICROSAM_CACHEDIR.

    \n\n

    By default sample data is downloaded to a folder named 'micro_sam/sample_data' inside your default cache directory, e.g.:
     * Mac: ~/Library/Caches/
     * Unix: ~/.cache/ or the value of the XDG_CACHE_HOME environment variable, if defined.
     * Windows: C:\Users\<user>\AppData\Local\<AppAuthor>\<AppName>\Cache

    \n"}, {"fullname": "micro_sam.sample_data.fetch_image_series_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_image_series_example_data", "kind": "function", "doc": "

    Download the sample images for the image series annotator.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The folder that contains the downloaded data.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_image_series", "modulename": "micro_sam.sample_data", "qualname": "sample_data_image_series", "kind": "function", "doc": "

    Provides image series example image to napari.

    \n\n

    Opens as three separate image layers in napari (one per image in series). The third image in the series has a different size and modality.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_wholeslide_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_wholeslide_example_data", "kind": "function", "doc": "

    Download the sample data for the 2d annotator.

    \n\n

    This downloads part of a whole-slide image from the NeurIPS Cell Segmentation Challenge. See https://neurips22-cellseg.grand-challenge.org/ for details on the data.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The path of the downloaded image.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_wholeslide", "modulename": "micro_sam.sample_data", "qualname": "sample_data_wholeslide", "kind": "function", "doc": "

    Provides wholeslide 2d example image to napari.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_livecell_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_livecell_example_data", "kind": "function", "doc": "

    Download the sample data for the 2d annotator.

    \n\n

    This downloads a single image from the LiveCELL dataset. See https://doi.org/10.1038/s41592-021-01249-6 for details on the data.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The path of the downloaded image.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_livecell", "modulename": "micro_sam.sample_data", "qualname": "sample_data_livecell", "kind": "function", "doc": "

    Provides livecell 2d example image to napari.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_hela_2d_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_hela_2d_example_data", "kind": "function", "doc": "

    Download the sample data for the 2d annotator.

    \n\n

    This downloads a single image from the HeLa CTC dataset.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The path of the downloaded image.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> Union[str, os.PathLike]:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_hela_2d", "modulename": "micro_sam.sample_data", "qualname": "sample_data_hela_2d", "kind": "function", "doc": "

    Provides HeLa 2d example image to napari.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_3d_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_3d_example_data", "kind": "function", "doc": "

    Download the sample data for the 3d annotator.

    \n\n

    This downloads the Lucchi++ dataset from https://casser.io/connectomics/. It is a dataset for mitochondria segmentation in EM.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The folder that contains the downloaded data.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_3d", "modulename": "micro_sam.sample_data", "qualname": "sample_data_3d", "kind": "function", "doc": "

    Provides Lucchi++ 3d example image to napari.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_tracking_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_tracking_example_data", "kind": "function", "doc": "

    Download the sample data for the tracking annotator.

    \n\n

    This data is the Cell Tracking Challenge dataset DIC-C2DH-HeLa (HeLa cells on flat glass, provided by Dr. G. van Cappellen, Erasmus Medical Center, Rotterdam, The Netherlands).
    Cell Tracking Challenge webpage: http://data.celltrackingchallenge.net
    Training dataset: http://data.celltrackingchallenge.net/training-datasets/DIC-C2DH-HeLa.zip (37 MB)
    Challenge dataset: http://data.celltrackingchallenge.net/challenge-datasets/DIC-C2DH-HeLa.zip (41 MB)

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The folder that contains the downloaded data.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_tracking", "modulename": "micro_sam.sample_data", "qualname": "sample_data_tracking", "kind": "function", "doc": "

    Provides tracking example dataset to napari.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_tracking_segmentation_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_tracking_segmentation_data", "kind": "function", "doc": "

    Download groundtruth segmentation for the tracking example data.

    \n\n

    This downloads the groundtruth segmentation for the image data from fetch_tracking_example_data.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The folder that contains the downloaded data.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.sample_data_segmentation", "modulename": "micro_sam.sample_data", "qualname": "sample_data_segmentation", "kind": "function", "doc": "

    Provides segmentation example dataset to napari.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.synthetic_data", "modulename": "micro_sam.sample_data", "qualname": "synthetic_data", "kind": "function", "doc": "

    Create synthetic image data and segmentation for training.

    \n", "signature": "(shape, seed=None):", "funcdef": "def"}, {"fullname": "micro_sam.sample_data.fetch_nucleus_3d_example_data", "modulename": "micro_sam.sample_data", "qualname": "fetch_nucleus_3d_example_data", "kind": "function", "doc": "

    Download the sample data for 3d segmentation of nuclei.

    \n\n

    This data contains a small crop from a volume from the publication
    "Efficient automatic 3D segmentation of cell nuclei for high-content screening",
    https://doi.org/10.1186/s12859-022-04737-4.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The path of the downloaded image.

    \n
    \n", "signature": "(save_directory: Union[str, os.PathLike]) -> str:", "funcdef": "def"}, {"fullname": "micro_sam.training", "modulename": "micro_sam.training", "kind": "module", "doc": "

    Functionality for training Segment Anything.

    \n"}, {"fullname": "micro_sam.training.joint_sam_trainer", "modulename": "micro_sam.training.joint_sam_trainer", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer", "kind": "class", "doc": "

    Trainer class for training the Segment Anything model.

    \n\n

    This class is derived from torch_em.trainer.DefaultTrainer.
    Check out https://github.com/constantinpape/torch-em/blob/main/torch_em/trainer/default_trainer.py for details on its usage and implementation.

    \n\n
    Arguments:
    \n\n\n", "bases": "micro_sam.training.sam_trainer.SamTrainer"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer.__init__", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer.__init__", "kind": "function", "doc": "

    \n", "signature": "(\tunetr: torch.nn.modules.module.Module,\tinstance_loss: torch.nn.modules.module.Module,\tinstance_metric: torch.nn.modules.module.Module,\t**kwargs)"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer.unetr", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer.unetr", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer.instance_loss", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer.instance_loss", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer.instance_metric", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer.instance_metric", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer.save_checkpoint", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer.save_checkpoint", "kind": "function", "doc": "

    \n", "signature": "(self, name, current_metric, best_metric, **extra_save_dict):", "funcdef": "def"}, {"fullname": "micro_sam.training.joint_sam_trainer.JointSamTrainer.load_checkpoint", "modulename": "micro_sam.training.joint_sam_trainer", "qualname": "JointSamTrainer.load_checkpoint", "kind": "function", "doc": "

    \n", "signature": "(self, checkpoint='best'):", "funcdef": "def"}, {"fullname": "micro_sam.training.sam_trainer", "modulename": "micro_sam.training.sam_trainer", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer", "kind": "class", "doc": "

    Trainer class for training the Segment Anything model.

    \n\n

    This class is derived from torch_em.trainer.DefaultTrainer.
    Check out https://github.com/constantinpape/torch-em/blob/main/torch_em/trainer/default_trainer.py for details on its usage and implementation.

    \n\n
    Arguments:
    \n\n\n", "bases": "torch_em.trainer.default_trainer.DefaultTrainer"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.__init__", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.__init__", "kind": "function", "doc": "

    \n", "signature": "(\tconvert_inputs,\tn_sub_iteration: int,\tn_objects_per_batch: Optional[int] = None,\tmse_loss: torch.nn.modules.module.Module = MSELoss(),\tprompt_generator: micro_sam.prompt_generators.PromptGeneratorBase = <micro_sam.prompt_generators.IterativePromptGenerator object>,\tmask_prob: float = 0.5,\t**kwargs)"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.convert_inputs", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.convert_inputs", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.mse_loss", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.mse_loss", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.n_objects_per_batch", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.n_objects_per_batch", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.n_sub_iteration", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.n_sub_iteration", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.prompt_generator", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.prompt_generator", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.sam_trainer.SamTrainer.mask_prob", "modulename": "micro_sam.training.sam_trainer", "qualname": "SamTrainer.mask_prob", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.trainable_sam", "modulename": "micro_sam.training.trainable_sam", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM", "kind": "class", "doc": "

    Wrapper to make the SegmentAnything model trainable.

    \n\n
    Arguments:
    \n\n\n", "bases": "torch.nn.modules.module.Module"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.__init__", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.__init__", "kind": "function", "doc": "

    Initializes internal Module state, shared by both nn.Module and ScriptModule.

    \n", "signature": "(\tsam: segment_anything.modeling.sam.Sam,\tdevice: Union[str, torch.device])"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.sam", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.sam", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.device", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.device", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.transform", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.transform", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.preprocess", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.preprocess", "kind": "function", "doc": "

    Resize, normalize pixel values and pad to a square input.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The resized, normalized and padded tensor.
    The shape of the image after resizing.

    \n
    \n", "signature": "(self, x: torch.Tensor) -> Tuple[torch.Tensor, Tuple[int, int]]:", "funcdef": "def"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.image_embeddings_oft", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.image_embeddings_oft", "kind": "function", "doc": "

    \n", "signature": "(self, batched_inputs):", "funcdef": "def"}, {"fullname": "micro_sam.training.trainable_sam.TrainableSAM.forward", "modulename": "micro_sam.training.trainable_sam", "qualname": "TrainableSAM.forward", "kind": "function", "doc": "

    Forward pass.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The predicted segmentation masks and iou values.

    \n
    \n", "signature": "(\tself,\tbatched_inputs: List[Dict[str, Any]],\timage_embeddings: torch.Tensor,\tmultimask_output: bool = False) -> List[Dict[str, Any]]:", "funcdef": "def"}, {"fullname": "micro_sam.training.training", "modulename": "micro_sam.training.training", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.training.training.FilePath", "modulename": "micro_sam.training.training", "qualname": "FilePath", "kind": "variable", "doc": "

    \n", "default_value": "typing.Union[str, os.PathLike]"}, {"fullname": "micro_sam.training.training.train_sam", "modulename": "micro_sam.training.training", "qualname": "train_sam", "kind": "function", "doc": "

    Run training for a SAM model.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tname: str,\tmodel_type: str,\ttrain_loader: torch.utils.data.dataloader.DataLoader,\tval_loader: torch.utils.data.dataloader.DataLoader,\tn_epochs: int = 100,\tearly_stopping: Optional[int] = 10,\tn_objects_per_batch: Optional[int] = 25,\tcheckpoint_path: Union[os.PathLike, str, NoneType] = None,\twith_segmentation_decoder: bool = True,\tfreeze: Optional[List[str]] = None,\tdevice: Union[str, torch.device, NoneType] = None,\tlr: float = 1e-05,\tn_sub_iteration: int = 8,\tsave_root: Union[os.PathLike, str, NoneType] = None,\tmask_prob: float = 0.5,\tn_iterations: Optional[int] = None,\tscheduler_class: Optional[torch.optim.lr_scheduler._LRScheduler] = <class 'torch.optim.lr_scheduler.ReduceLROnPlateau'>,\tscheduler_kwargs: Optional[Dict[str, Any]] = None,\tsave_every_kth_epoch: Optional[int] = None,\tpbar_signals: Optional[PyQt5.QtCore.QObject] = None) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.training.training.default_sam_dataset", "modulename": "micro_sam.training.training", "qualname": "default_sam_dataset", "kind": "function", "doc": "

    Create a PyTorch Dataset for training a SAM model.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The dataset.

    \n
    \n", "signature": "(\traw_paths: Union[List[Union[os.PathLike, str]], str, os.PathLike],\traw_key: Optional[str],\tlabel_paths: Union[List[Union[os.PathLike, str]], str, os.PathLike],\tlabel_key: Optional[str],\tpatch_shape: Tuple[int],\twith_segmentation_decoder: bool,\twith_channels: bool = False,\tsampler=None,\tn_samples: Optional[int] = None,\tis_train: bool = True,\t**kwargs) -> torch.utils.data.dataset.Dataset:", "funcdef": "def"}, {"fullname": "micro_sam.training.training.default_sam_loader", "modulename": "micro_sam.training.training", "qualname": "default_sam_loader", "kind": "function", "doc": "

    \n", "signature": "(**kwargs) -> torch.utils.data.dataloader.DataLoader:", "funcdef": "def"}, {"fullname": "micro_sam.training.training.CONFIGURATIONS", "modulename": "micro_sam.training.training", "qualname": "CONFIGURATIONS", "kind": "variable", "doc": "

    Best training configurations for given hardware resources.

    \n", "default_value": "{'Minimal': {'model_type': 'vit_t', 'n_objects_per_batch': 4, 'n_sub_iteration': 4}, 'CPU': {'model_type': 'vit_b', 'n_objects_per_batch': 10}, 'gtx1080': {'model_type': 'vit_t', 'n_objects_per_batch': 5}, 'rtx5000': {'model_type': 'vit_b', 'n_objects_per_batch': 10}, 'V100': {'model_type': 'vit_b'}, 'A100': {'model_type': 'vit_h'}}"}, {"fullname": "micro_sam.training.training.train_sam_for_configuration", "modulename": "micro_sam.training.training", "qualname": "train_sam_for_configuration", "kind": "function", "doc": "

    Run training for a SAM model with the configuration for a given hardware resource.

    \n\n

    Selects the best training settings for the given configuration. The available configurations are listed in CONFIGURATIONS.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tname: str,\tconfiguration: str,\ttrain_loader: torch.utils.data.dataloader.DataLoader,\tval_loader: torch.utils.data.dataloader.DataLoader,\tcheckpoint_path: Union[os.PathLike, str, NoneType] = None,\twith_segmentation_decoder: bool = True,\t**kwargs) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.training.util", "modulename": "micro_sam.training.util", "kind": "module", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.identity", "modulename": "micro_sam.training.util", "qualname": "identity", "kind": "function", "doc": "

    Identity transformation.

    \n\n

    This is a helper function to skip data normalization when finetuning SAM. Data normalization is performed within the model and should thus be skipped as a preprocessing step in training.

    \n", "signature": "(x):", "funcdef": "def"}, {"fullname": "micro_sam.training.util.require_8bit", "modulename": "micro_sam.training.util", "qualname": "require_8bit", "kind": "function", "doc": "

    Transformation to require 8bit input data range (0-255).

    \n", "signature": "(x):", "funcdef": "def"}, {"fullname": "micro_sam.training.util.get_trainable_sam_model", "modulename": "micro_sam.training.util", "qualname": "get_trainable_sam_model", "kind": "function", "doc": "

    Get the trainable SAM model.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The trainable segment anything model.

    \n
    \n", "signature": "(\tmodel_type: str = 'vit_l',\tdevice: Union[str, torch.device, NoneType] = None,\tcheckpoint_path: Union[os.PathLike, str, NoneType] = None,\tfreeze: Optional[List[str]] = None,\treturn_state: bool = False) -> micro_sam.training.trainable_sam.TrainableSAM:", "funcdef": "def"}, {"fullname": "micro_sam.training.util.ConvertToSamInputs", "modulename": "micro_sam.training.util", "qualname": "ConvertToSamInputs", "kind": "class", "doc": "

    Convert outputs of data loader to the expected batched inputs of the SegmentAnything model.

    \n\n
    Arguments:
    \n\n\n"}, {"fullname": "micro_sam.training.util.ConvertToSamInputs.__init__", "modulename": "micro_sam.training.util", "qualname": "ConvertToSamInputs.__init__", "kind": "function", "doc": "

    \n", "signature": "(\ttransform: Optional[segment_anything.utils.transforms.ResizeLongestSide],\tdilation_strength: int = 10,\tbox_distortion_factor: Optional[float] = None)"}, {"fullname": "micro_sam.training.util.ConvertToSamInputs.dilation_strength", "modulename": "micro_sam.training.util", "qualname": "ConvertToSamInputs.dilation_strength", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ConvertToSamInputs.transform", "modulename": "micro_sam.training.util", "qualname": "ConvertToSamInputs.transform", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ConvertToSamInputs.box_distortion_factor", "modulename": "micro_sam.training.util", "qualname": "ConvertToSamInputs.box_distortion_factor", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeRawTrafo", "modulename": "micro_sam.training.util", "qualname": "ResizeRawTrafo", "kind": "class", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeRawTrafo.__init__", "modulename": "micro_sam.training.util", "qualname": "ResizeRawTrafo.__init__", "kind": "function", "doc": "

    \n", "signature": "(desired_shape, do_rescaling=True, padding='constant')"}, {"fullname": "micro_sam.training.util.ResizeRawTrafo.desired_shape", "modulename": "micro_sam.training.util", "qualname": "ResizeRawTrafo.desired_shape", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeRawTrafo.padding", "modulename": "micro_sam.training.util", "qualname": "ResizeRawTrafo.padding", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeRawTrafo.do_rescaling", "modulename": "micro_sam.training.util", "qualname": "ResizeRawTrafo.do_rescaling", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeLabelTrafo", "modulename": "micro_sam.training.util", "qualname": "ResizeLabelTrafo", "kind": "class", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeLabelTrafo.__init__", "modulename": "micro_sam.training.util", "qualname": "ResizeLabelTrafo.__init__", "kind": "function", "doc": "

    \n", "signature": "(desired_shape, padding='constant', min_size=0)"}, {"fullname": "micro_sam.training.util.ResizeLabelTrafo.desired_shape", "modulename": "micro_sam.training.util", "qualname": "ResizeLabelTrafo.desired_shape", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeLabelTrafo.padding", "modulename": "micro_sam.training.util", "qualname": "ResizeLabelTrafo.padding", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.training.util.ResizeLabelTrafo.min_size", "modulename": "micro_sam.training.util", "qualname": "ResizeLabelTrafo.min_size", "kind": "variable", "doc": "

    \n"}, {"fullname": "micro_sam.util", "modulename": "micro_sam.util", "kind": "module", "doc": "

    Helper functions for downloading Segment Anything models and predicting image embeddings.

    \n"}, {"fullname": "micro_sam.util.get_cache_directory", "modulename": "micro_sam.util", "qualname": "get_cache_directory", "kind": "function", "doc": "

Get the micro-sam cache directory location.

    \n\n

    Users can set the MICROSAM_CACHEDIR environment variable for a custom cache directory.

    \n", "signature": "() -> None:", "funcdef": "def"}, {"fullname": "micro_sam.util.microsam_cachedir", "modulename": "micro_sam.util", "qualname": "microsam_cachedir", "kind": "function", "doc": "

    Return the micro-sam cache directory.

    \n\n

    Returns the top level cache directory for micro-sam models and sample data.

    \n\n

Every time this function is called, we check for any user updates made to\nthe MICROSAM_CACHEDIR environment variable since the last call.

    \n", "signature": "() -> None:", "funcdef": "def"}, {"fullname": "micro_sam.util.models", "modulename": "micro_sam.util", "qualname": "models", "kind": "function", "doc": "

    Return the segmentation models registry.

    \n\n

    We recreate the model registry every time this function is called,\nso any user changes to the default micro-sam cache directory location\nare respected.

    \n", "signature": "():", "funcdef": "def"}, {"fullname": "micro_sam.util.get_device", "modulename": "micro_sam.util", "qualname": "get_device", "kind": "function", "doc": "

    Get the torch device.

    \n\n

If no device is passed, the default device for your system is used.\nOtherwise, it is checked whether the passed device is supported.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The device.

    \n
    \n", "signature": "(\tdevice: Union[str, torch.device, NoneType] = None) -> Union[str, torch.device]:", "funcdef": "def"}, {"fullname": "micro_sam.util.get_sam_model", "modulename": "micro_sam.util", "qualname": "get_sam_model", "kind": "function", "doc": "

    Get the SegmentAnything Predictor.

    \n\n

This function will download the required model or load it from the cached weight file.\nThe location of the cache can be changed by setting the environment variable MICROSAM_CACHEDIR.\nThe name of the requested model can be set via model_type.\nSee https://computational-cell-analytics.github.io/micro-sam/micro_sam.html#finetuned-models\nfor an overview of the available models.

    \n\n

Alternatively, this function can also load a model from weights stored in a local filepath.\nThe corresponding file path is given via checkpoint_path. In this case model_type\nmust be given as the matching encoder architecture, e.g. \"vit_b\" if the weights are for\na SAM model with a vit_b encoder.

    \n\n

By default the models are downloaded to a folder named 'micro_sam/models'\ninside your default cache directory, e.g.:

    \n\n\n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The segment anything predictor.

    \n
    \n", "signature": "(\tmodel_type: str = 'vit_l',\tdevice: Union[str, torch.device, NoneType] = None,\tcheckpoint_path: Union[str, os.PathLike, NoneType] = None,\treturn_sam: bool = False,\treturn_state: bool = False) -> mobile_sam.predictor.SamPredictor:", "funcdef": "def"}, {"fullname": "micro_sam.util.export_custom_sam_model", "modulename": "micro_sam.util", "qualname": "export_custom_sam_model", "kind": "function", "doc": "

    Export a finetuned segment anything model to the standard model format.

    \n\n

    The exported model can be used by the interactive annotation tools in micro_sam.annotator.

    \n\n
    Arguments:
    \n\n\n", "signature": "(\tcheckpoint_path: Union[str, os.PathLike],\tmodel_type: str,\tsave_path: Union[str, os.PathLike]) -> None:", "funcdef": "def"}, {"fullname": "micro_sam.util.get_model_names", "modulename": "micro_sam.util", "qualname": "get_model_names", "kind": "function", "doc": "

    \n", "signature": "() -> Iterable:", "funcdef": "def"}, {"fullname": "micro_sam.util.precompute_image_embeddings", "modulename": "micro_sam.util", "qualname": "precompute_image_embeddings", "kind": "function", "doc": "

    Compute the image embeddings (output of the encoder) for the input.

    \n\n

If 'save_path' is given, the embeddings will be saved to or loaded from a zarr container.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The image embeddings.

    \n
    \n", "signature": "(\tpredictor: mobile_sam.predictor.SamPredictor,\tinput_: numpy.ndarray,\tsave_path: Union[str, os.PathLike, NoneType] = None,\tlazy_loading: bool = False,\tndim: Optional[int] = None,\ttile_shape: Optional[Tuple[int, int]] = None,\thalo: Optional[Tuple[int, int]] = None,\tverbose: bool = True,\tpbar_init: Optional[<built-in function callable>] = None,\tpbar_update: Optional[<built-in function callable>] = None) -> Dict[str, Any]:", "funcdef": "def"}, {"fullname": "micro_sam.util.set_precomputed", "modulename": "micro_sam.util", "qualname": "set_precomputed", "kind": "function", "doc": "

    Set the precomputed image embeddings for a predictor.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The predictor with set features.

    \n
    \n", "signature": "(\tpredictor: mobile_sam.predictor.SamPredictor,\timage_embeddings: Dict[str, Any],\ti: Optional[int] = None,\ttile_id: Optional[int] = None) -> mobile_sam.predictor.SamPredictor:", "funcdef": "def"}, {"fullname": "micro_sam.util.compute_iou", "modulename": "micro_sam.util", "qualname": "compute_iou", "kind": "function", "doc": "

    Compute the intersection over union of two masks.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The intersection over union of the two masks.

    \n
    \n", "signature": "(mask1: numpy.ndarray, mask2: numpy.ndarray) -> float:", "funcdef": "def"}, {"fullname": "micro_sam.util.get_centers_and_bounding_boxes", "modulename": "micro_sam.util", "qualname": "get_centers_and_bounding_boxes", "kind": "function", "doc": "

Returns the center coordinates and bounding boxes of the foreground instances in the ground-truth.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

A dictionary that maps object ids to the corresponding centroid.\n A dictionary that maps object ids to the corresponding bounding box.

    \n
    \n", "signature": "(\tsegmentation: numpy.ndarray,\tmode: str = 'v') -> Tuple[Dict[int, numpy.ndarray], Dict[int, tuple]]:", "funcdef": "def"}, {"fullname": "micro_sam.util.load_image_data", "modulename": "micro_sam.util", "qualname": "load_image_data", "kind": "function", "doc": "

    Helper function to load image data from file.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The image data.

    \n
    \n", "signature": "(\tpath: str,\tkey: Optional[str] = None,\tlazy_loading: bool = False) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.util.segmentation_to_one_hot", "modulename": "micro_sam.util", "qualname": "segmentation_to_one_hot", "kind": "function", "doc": "

    Convert the segmentation to one-hot encoded masks.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The one-hot encoded masks.

    \n
    \n", "signature": "(\tsegmentation: numpy.ndarray,\tsegmentation_ids: Optional[numpy.ndarray] = None) -> torch.Tensor:", "funcdef": "def"}, {"fullname": "micro_sam.util.get_block_shape", "modulename": "micro_sam.util", "qualname": "get_block_shape", "kind": "function", "doc": "

    Get a suitable block shape for chunking a given shape.

    \n\n

    The primary use for this is determining chunk sizes for\nzarr arrays or block shapes for parallelization.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The block shape.

    \n
    \n", "signature": "(shape: Tuple[int]) -> Tuple[int]:", "funcdef": "def"}, {"fullname": "micro_sam.visualization", "modulename": "micro_sam.visualization", "kind": "module", "doc": "

    Functionality for visualizing image embeddings.

    \n"}, {"fullname": "micro_sam.visualization.compute_pca", "modulename": "micro_sam.visualization", "qualname": "compute_pca", "kind": "function", "doc": "

Compute the PCA projection of the embeddings to visualize them as an RGB image.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    PCA of the embeddings, mapped to the pixels.

    \n
    \n", "signature": "(embeddings: numpy.ndarray) -> numpy.ndarray:", "funcdef": "def"}, {"fullname": "micro_sam.visualization.project_embeddings_for_visualization", "modulename": "micro_sam.visualization", "qualname": "project_embeddings_for_visualization", "kind": "function", "doc": "

    Project image embeddings to pixel-wise PCA.

    \n\n
    Arguments:
    \n\n\n\n
    Returns:
    \n\n
    \n

    The PCA of the embeddings.\n The scale factor for resizing to the original image size.

    \n
    \n", "signature": "(\timage_embeddings: Dict[str, Any]) -> Tuple[numpy.ndarray, Tuple[float, ...]]:", "funcdef": "def"}]; // mirrored in build-search-index.js (part 1) // Also split on html tags. this is a cheap heuristic, but good enough.