Commit

Merge branch 'develop' into dev-define-engines-abc
shaneahmed authored Dec 3, 2024
2 parents f6ba41f + 6b214fe commit 8d94094
Showing 25 changed files with 5,901 additions and 5,348 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/python-package.yml
@@ -30,7 +30,7 @@ jobs:
sudo apt update
sudo apt-get install -y libopenslide-dev openslide-tools libopenjp2-7 libopenjp2-tools
python -m pip install --upgrade pip
- python -m pip install ruff==0.7.4 pytest pytest-cov pytest-runner
+ python -m pip install ruff==0.8.1 pytest pytest-cov pytest-runner
pip install -r requirements/requirements.txt
- name: Cache tiatoolbox static assets
uses: actions/cache@v3
4 changes: 2 additions & 2 deletions .pre-commit-config.yaml
@@ -23,7 +23,7 @@ repos:
- mdformat-black
- mdformat-myst
- repo: https://github.com/executablebooks/mdformat
- rev: 0.7.18
+ rev: 0.7.19
hooks:
- id: mdformat
# Optionally add plugins
@@ -60,7 +60,7 @@ repos:
- id: rst-inline-touching-normal # Detect mistake of inline code touching normal text in rst.
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
- rev: v0.7.4
+ rev: v0.8.1
hooks:
- id: ruff
args: [--fix, --exit-non-zero-on-fix]
18 changes: 9 additions & 9 deletions HISTORY.md
@@ -64,7 +64,7 @@

### Major Updates and Feature Improvements

- - Adds Python 3.11 support \[experimental\] #500
+ - Adds Python 3.11 support [experimental] #500
- Python 3.11 is not fully supported by `pytorch` https://github.com/pytorch/pytorch/issues/86566 and `openslide` https://github.com/openslide/openslide-python/pull/188
- Removes Python 3.7 support
- This allows upgrading all the dependencies which were dependent on an older version of Python.
@@ -181,7 +181,7 @@ None
- Adds DICE metric
- Adds [SCCNN](https://doi.org/10.1109/tmi.2016.2525803) architecture. \[[read the docs](https://tia-toolbox.readthedocs.io/en/develop/_autosummary/tiatoolbox.models.architecture.sccnn.SCCNN.html)\]
- Adds [MapDe](https://arxiv.org/abs/1806.06970) architecture. \[[read the docs](https://tia-toolbox.readthedocs.io/en/develop/_autosummary/tiatoolbox.models.architecture.mapde.MapDe.html)\]
- - Adds support for reading MPP metadata from NGFF v0.4
+ - Adds support for reading MPP metadata from NGFF v0.4
- Adds enhancements to tiatoolbox.annotation.storage that are useful when using an AnnotationStore for visualization purposes.

### Changes to API
@@ -196,7 +196,7 @@ None
- Fixes nucleus_segmentor_engine for boundary artefacts
- Fixes the colorbar cropping in tests
- Adds citation in README.md and CITATION.cff to Nature Communications Medicine paper
- - Fixes a bug #452 raised by @rogertrullo where only the numerator of the TIFF resolution tags was being read.
+ - Fixes a bug #452 raised by @rogertrullo where only the numerator of the TIFF resolution tags was being read.
- Fixes HoVer-Net+ post-processing to be inline with original work.
- Fixes a bug where an exception would be raised if the OME XML is missing objective power.

@@ -337,7 +337,7 @@ None
### Major Updates and Feature Improvements

- Adds nucleus instance segmentation base class
- - Adds [HoVerNet](https://www.sciencedirect.com/science/article/abs/pii/S1361841519301045) architecture
+ - Adds [HoVerNet](https://www.sciencedirect.com/science/article/abs/pii/S1361841519301045) architecture
- Adds multi-task segmentor [HoVerNet+](https://arxiv.org/abs/2108.13904) model
- Adds <a href="https://www.thelancet.com/journals/landig/article/PIIS2589-7500(2100180-1/fulltext">IDaRS</a> pipeline
- Adds [SlideGraph](https://arxiv.org/abs/2110.06042) pipeline
@@ -358,7 +358,7 @@ None

### Bug Fixes and Other Changes

- - Fixes Fix `filter_coordinates` read wrong resolutions for patch extraction
+ - Fixes `filter_coordinates` read wrong resolutions for patch extraction
- For `PatchPredictor`
- `ioconfig` will supersede everything
- if `ioconfig` is not provided
@@ -410,7 +410,7 @@ None
- Adds dependencies for tiffile, imagecodecs, zarr.
- Adds more stringent pre-commit checks
- Moved local test files into `tiatoolbox/data`.
- - Fixed `Manifest.ini` and added `tiatoolbox/data`. This means that this directory will be downloaded with the package.
+ - Fixed `Manifest.ini` and added `tiatoolbox/data`. This means that this directory will be downloaded with the package.
- Using `pkg_resources` to properly load bundled resources (e.g. `target_image.png`) in `tiatoolbox.data`.
- Removed duplicate code in `conftest.py` for downloading remote files. This is now in `tiatoolbox.data._fetch_remote_file`.
- Fixes errors raised by new flake8 rules.
@@ -513,9 +513,9 @@ ______________________________________________________________________
- `read_bounds` takes a tuple (left, top, right, bottom) of coordinates in baseline (level 0) reference frame and returns a region bounded by those.
- `read_rect` takes one coordinate in baseline reference frame and an output size in pixels.
- Adds `VirtualWSIReader` as a subclass of WSIReader which can be used to read visual fields (tiles).
- - `VirtualWSIReader` accepts ndarray or image path as input.
- - Adds MPP fall back to standard TIFF resolution tags with warning.
- - If OpenSlide cannot determine microns per pixel (`mpp`) from the metadata, checks the TIFF resolution units (TIFF tags: `ResolutionUnit`, `XResolution` and `YResolution`) to calculate MPP. Additionally, add function to estimate missing objective power if MPP is known of derived from TIFF resolution tags.
+ - `VirtualWSIReader` accepts ndarray or image path as input.
+ - Adds MPP fall back to standard TIFF resolution tags with warning.
+ - If OpenSlide cannot determine microns per pixel (`mpp`) from the metadata, checks the TIFF resolution units (TIFF tags: `ResolutionUnit`, `XResolution` and `YResolution`) to calculate MPP. Additionally, add function to estimate missing objective power if MPP is known of derived from TIFF resolution tags.
- Estimates missing objective power from MPP with warning.
- Adds example notebooks for stain normalisation and WSI reader.
- Adds caching to slide info property. This is done by checking if a private `self._m_info` exists and returning it if so, otherwise `self._info` is called to create the info for the first time (or to force regenerating) and the result is assigned to `self._m_info`. This could in future be made much simpler with the `functools.cached_property` decorator in Python 3.8+.
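The caching pattern in the last changelog entry is essentially what `functools.cached_property` now provides out of the box; a minimal sketch (class and attribute names are illustrative, not tiatoolbox's actual ones):

```python
from functools import cached_property

class SlideMeta:
    """Sketch of caching an expensive metadata read, as the entry describes."""

    def __init__(self) -> None:
        self.reads = 0  # counts how often the expensive path actually runs

    @cached_property
    def info(self) -> dict:
        # Stands in for the expensive `self._info` computation: it runs once,
        # and the result is cached on the instance (like `self._m_info`).
        self.reads += 1
        return {"mpp": 0.25, "objective_power": 40}

meta = SlideMeta()
meta.info  # first access computes
meta.info  # second access hits the cache; meta.reads is still 1
```

Deleting the attribute (`del meta.info`) drops the cache, mirroring the "force regenerating" path mentioned above.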
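The MPP fall-back to TIFF resolution tags described in this changelog is, at heart, a unit conversion from `ResolutionUnit` plus `XResolution`/`YResolution`; a hedged sketch of the arithmetic, not tiatoolbox's actual implementation:

```python
# TIFF ResolutionUnit codes: 2 = pixels per inch, 3 = pixels per centimetre.
MICRONS_PER_UNIT = {2: 25400.0, 3: 10000.0}

def mpp_from_tiff_tags(resolution_unit: int, x_resolution: float, y_resolution: float):
    """Return (mpp_x, mpp_y) in microns per pixel, or None if not derivable."""
    microns = MICRONS_PER_UNIT.get(resolution_unit)
    if microns is None or x_resolution <= 0 or y_resolution <= 0:
        return None  # unit code 1 ("no unit") or malformed tags: give up
    return microns / x_resolution, microns / y_resolution

# e.g. 50000 pixels per inch -> 25400 / 50000 = 0.508 microns per pixel
```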
4 changes: 2 additions & 2 deletions benchmarks/annotation_store.ipynb
@@ -195,7 +195,7 @@
"from typing import TYPE_CHECKING, Any\n",
"\n",
"import numpy as np\n",
"from IPython.display import display\n",
"from IPython.display import display_svg\n",
"from matplotlib import patheffects\n",
"from matplotlib import pyplot as plt\n",
"from shapely import affinity\n",
@@ -444,7 +444,7 @@
],
"source": [
"for n in range(4):\n",
" display(cell_polygon(xy=(0, 0), n_points=20, repeat_first=False, seed=n))"
" display_svg(cell_polygon(xy=(0, 0), n_points=20, repeat_first=False, seed=n))"
]
},
{
Binary file added docs/images/feature_extraction.png
141 changes: 67 additions & 74 deletions examples/05-patch-prediction.ipynb

Large diffs are not rendered by default.

50 changes: 33 additions & 17 deletions examples/06-semantic-segmentation.ipynb

Large diffs are not rendered by default.

132 changes: 88 additions & 44 deletions examples/07-advanced-modeling.ipynb

Large diffs are not rendered by default.

26 changes: 13 additions & 13 deletions examples/08-nucleus-instance-segmentation.ipynb
@@ -158,7 +158,7 @@
"source": [
"### GPU or CPU runtime\n",
"\n",
"Processes in this notebook can be accelerated by using a GPU. Therefore, whether you are running this notebook on your system or Colab, you need to check and specify if you are using GPU or CPU hardware acceleration. In Colab, you need to make sure that the runtime type is set to GPU in the *\"Runtime→Change runtime type→Hardware accelerator\"*. If you are *not* using GPU, consider changing the `ON_GPU` flag to `Flase` value, otherwise, some errors will be raised when running the following cells.\n",
"Processes in this notebook can be accelerated by using a GPU. Therefore, whether you are running this notebook on your system or Colab, you need to check and specify appropriate [device](https://pytorch.org/docs/stable/tensor_attributes.html#torch.device) e.g., \"cuda\" or \"cpu\" whether you are using GPU or CPU. In Colab, you need to make sure that the runtime type is set to GPU in the *\"Runtime→Change runtime type→Hardware accelerator\"*. If you are *not* using GPU, consider changing the `device` flag to `cpu` value, otherwise, some errors will be raised when running the following cells.\n",
"\n"
]
},
@@ -173,8 +173,7 @@
},
"outputs": [],
"source": [
"# Should be changed to False if no cuda-enabled GPU is available.\n",
"ON_GPU = True # Default is True."
"device = \"cuda\" # Choose appropriate device"
]
},
{
@@ -356,7 +355,7 @@
" [img_file_name],\n",
" save_dir=\"sample_tile_results/\",\n",
" mode=\"tile\",\n",
" on_gpu=ON_GPU,\n",
" device=device,\n",
" crash_on_exception=True,\n",
")"
]
@@ -386,7 +385,7 @@
"\n",
"- `mode`: the mode of inference which can be set to either `'tile'` or `'wsi'` for plain histology images or structured whole slides images, respectively.\n",
"\n",
"- `on_gpu`: can be `True` or `False` to dictate running the computations on GPU or CPU.\n",
"- `device`: specify appropriate [device](https://pytorch.org/docs/stable/tensor_attributes.html#torch.device) e.g., \"cuda\", \"cuda:0\", \"mps\", \"cpu\" etc.\n",
"\n",
"- `crash_on_exception`: If set to `True`, the running loop will crash if there is an error during processing a WSI. Otherwise, the loop will move on to the next image (wsi) for processing. We suggest that you first make sure that the prediction is working as expected by testing it on a couple of inputs and then set this flag to `False` to process large cohorts of inputs.\n",
"\n",
@@ -5615,13 +5614,13 @@
")\n",
"\n",
"# WSI prediction\n",
"# if ON_GPU=False, this part will take more than a couple of hours to process.\n",
"# if device=\"cpu\", this part will take more than a couple of hours to process.\n",
"wsi_output = inst_segmentor.predict(\n",
" [wsi_file_name],\n",
" masks=None,\n",
" save_dir=\"sample_wsi_results/\",\n",
" mode=\"wsi\",\n",
" on_gpu=ON_GPU,\n",
" device=device,\n",
" crash_on_exception=True,\n",
")"
]
@@ -5638,7 +5637,7 @@
"1. Setting `mode='wsi'` in the arguments to `predict` tells the program that the input are in WSI format.\n",
"1. `masks=None`: the `masks` argument to the `predict` function is handled in the same way as the imgs argument. It is a list of paths to the desired image masks. Patches from `imgs` are only processed if they are within a masked area of their corresponding `masks`. If not provided (`masks=None`), then a tissue mask is generated for whole-slide images or, for image tiles, the entire image is processed.\n",
"\n",
"The above code cell might take a while to process, especially if `ON_GPU=False`. The processing time mostly depends on the size of the input WSI.\n",
"The above code cell might take a while to process, especially if `device=\"cpu\"`. The processing time mostly depends on the size of the input WSI.\n",
"The output, `wsi_output`, of `predict` contains a list of paths to the input WSIs and the corresponding output results saved on disk. The results for nucleus instance segmentation in `'wsi'` mode are stored in a Python dictionary, in the same way as was done for `'tile'` mode.\n",
"We use `joblib` to load the outputs for this sample WSI and then inspect the results dictionary.\n",
"\n"
@@ -5788,11 +5787,12 @@
")\n",
"\n",
"color_dict = {\n",
" 0: (\"neoplastic epithelial\", (255, 0, 0)),\n",
" 1: (\"Inflammatory\", (255, 255, 0)),\n",
" 2: (\"Connective\", (0, 255, 0)),\n",
" 3: (\"Dead\", (0, 0, 0)),\n",
" 4: (\"non-neoplastic epithelial\", (0, 0, 255)),\n",
" 0: (\"background\", (255, 165, 0)),\n",
" 1: (\"neoplastic epithelial\", (255, 0, 0)),\n",
" 2: (\"Inflammatory\", (255, 255, 0)),\n",
" 3: (\"Connective\", (0, 255, 0)),\n",
" 4: (\"Dead\", (0, 0, 0)),\n",
" 5: (\"non-neoplastic epithelial\", (0, 0, 255)),\n",
"}\n",
"\n",
"# Create the overlay image\n",
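The updated `color_dict` maps each predicted type id to a name and an RGB colour; painting a predicted type map with it takes only a few lines of NumPy (a sketch, not the toolbox's own overlay helper):

```python
import numpy as np

color_dict = {
    0: ("background", (255, 165, 0)),
    1: ("neoplastic epithelial", (255, 0, 0)),
    2: ("Inflammatory", (255, 255, 0)),
    3: ("Connective", (0, 255, 0)),
    4: ("Dead", (0, 0, 0)),
    5: ("non-neoplastic epithelial", (0, 0, 255)),
}

def colour_type_map(type_map: np.ndarray) -> np.ndarray:
    """Return an RGB image with each type id painted in its colour."""
    rgb = np.zeros((*type_map.shape, 3), dtype=np.uint8)
    for type_id, (_name, colour) in color_dict.items():
        rgb[type_map == type_id] = colour  # boolean mask per class
    return rgb

demo = colour_type_map(np.array([[0, 1], [2, 5]]))
```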
29 changes: 14 additions & 15 deletions examples/09-multi-task-segmentation.ipynb
@@ -105,7 +105,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"metadata": {
"id": "UEIfjUTaJLPj",
"outputId": "e4f383f2-306d-4afd-cd82-fec14a184941",
@@ -169,13 +169,13 @@
"source": [
"### GPU or CPU runtime\n",
"\n",
"Processes in this notebook can be accelerated by using a GPU. Therefore, whether you are running this notebook on your system or Colab, you need to check and specify whether you are using GPU or CPU hardware acceleration. In Colab, make sure that the runtime type is set to GPU, using the menu *Runtime→Change runtime type→Hardware accelerator*. If you are *not* using GPU, change the `ON_GPU` flag to `False`.\n",
"Processes in this notebook can be accelerated by using a GPU. Therefore, whether you are running this notebook on your system or Colab, you need to check and specify appropriate [device](https://pytorch.org/docs/stable/tensor_attributes.html#torch.device) e.g., \"cuda\" or \"cpu\" whether you are using GPU or CPU. In Colab, you need to make sure that the runtime type is set to GPU in the *\"Runtime→Change runtime type→Hardware accelerator\"*. If you are *not* using GPU, consider changing the `device` flag to `cpu` value, otherwise, some errors will be raised when running the following cells.\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": null,
"metadata": {
"id": "haTA_oQIY1Vy",
"tags": [
@@ -184,8 +184,7 @@
},
"outputs": [],
"source": [
"# Should be changed to False if no cuda-enabled GPU is available.\n",
"ON_GPU = True # Default is True."
"device = \"cuda\" # Choose appropriate device"
]
},
{
@@ -205,7 +204,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
@@ -260,7 +259,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
@@ -335,7 +334,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
@@ -390,7 +389,7 @@
" [img_file_name],\n",
" save_dir=global_save_dir / \"sample_tile_results\",\n",
" mode=\"tile\",\n",
" on_gpu=ON_GPU,\n",
" device=device,\n",
" crash_on_exception=True,\n",
")"
]
@@ -418,7 +417,7 @@
"\n",
"- `mode`: the mode of inference which can be set to either `'tile'` or `'wsi'`, for plain histology images or structured whole slides images, respectively.\n",
"\n",
"- `on_gpu`: can be either `True` or `False` to dictate running the computations on GPU or CPU.\n",
"- `device`: specify appropriate [device](https://pytorch.org/docs/stable/tensor_attributes.html#torch.device) e.g., \"cuda\", \"cuda:0\", \"mps\", \"cpu\" etc.\n",
"\n",
"- `crash_on_exception`: If set to `True`, the running loop will crash if there is an error during processing a WSI. Otherwise, the loop will move on to the next image (wsi) for processing. We suggest that you first make sure that prediction is working as expected by testing it on a couple of inputs and then set this flag to `False` to process large cohorts of inputs.\n",
"\n",
@@ -430,7 +429,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
@@ -546,7 +545,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
@@ -595,7 +594,7 @@
" masks=None,\n",
" save_dir=global_save_dir / \"sample_wsi_results/\",\n",
" mode=\"wsi\",\n",
" on_gpu=ON_GPU,\n",
" device=device,\n",
" crash_on_exception=True,\n",
")"
]
@@ -612,13 +611,13 @@
"1. Setting `mode='wsi'` in the `predict` function indicates that we are predicting region segmentations for inputs in the form of WSIs.\n",
"1. `masks=None` in the `predict` function: the `masks` argument is a list of paths to the desired image masks. Patches from `imgs` are only processed if they are within a masked area of their corresponding `masks`. If not provided (`masks=None`), then either a tissue mask is automatically generated for whole-slide images or the entire image is processed as a collection of image tiles.\n",
"\n",
"The above cell might take a while to process, especially if you have set `ON_GPU=False`. The processing time depends on the size of the input WSI and the selected resolution. Here, we have not specified any values and we use the assumed input resolution (20x) of HoVer-Net+.\n",
"The above cell might take a while to process, especially if you have set `device=\"cpu\"`. The processing time depends on the size of the input WSI and the selected resolution. Here, we have not specified any values and we use the assumed input resolution (20x) of HoVer-Net+.\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
