diff --git a/README.md b/README.md
index 2b14300..abc80c9 100644
--- a/README.md
+++ b/README.md
@@ -1,24 +1,23 @@
-# DLMUSE - Deep Learning MUlti-atlas region Segmentation utilizing Ensembles of registration algorithms and parameters
+### DLMUSE - Deep Learning MUlti-atlas region Segmentation utilizing Ensembles of registration algorithms and parameters
 
 ## Overview
 
-DLMUSE uses a trained [nnUNet](https://github.com/MIC-DKFZ/nnUNet/tree/nnunetv1) model to compute the segmentation of the brain into [MUSE](https://www.med.upenn.edu/cbica/sbia/muse.html) ROIs from the nifti image of the Intra Cranial Volume (ICV - see [DLICV method](https://github.com/CBICA/DLICV)), oriented in _**LPS**_ orientation. It produces the segmented brain, along with a .csv file of the calculated volumes of each ROI.
+DLMUSE uses a trained [nnUNet](https://github.com/MIC-DKFZ/nnUNet/tree/nnunetv1) model to compute the segmentation of the brain into [MUSE](https://www.med.upenn.edu/cbica/sbia/muse.html) ROIs from the nifti image of the Intra Cranial Volume (ICV - see [DLICV method](https://github.com/CBICA/DLICV)), oriented in _**LPS**_ orientation. It produces the segmented brain, along with a .csv file of the calculated volumes of each ROI.
 
-## Installation
+### Installation
 
-### As a python package
+## As a python package
 
 ```bash
 pip install DLMUSE
 ```
 
-### Directly from this repository
+## Directly from this repository
 
 ```bash
 git clone https://github.com/CBICA/DLMUSE
 cd DLMUSE
-conda create -n DLMUSE -y python=3.8 && conda activate DLMUSE
-pip install .
+pip install -e .
 ```
 
 ### Using docker
@@ -31,49 +30,9 @@ docker pull aidinisg/dlmuse:0.0.1
 
 A pre-trained nnUNet model can be found in the [DLMUSE-0.0.1 release](https://github.com/CBICA/DLMUSE/releases/tag/v0.0.1) as an [artifact](https://github.com/CBICA/DLMUSE/releases/download/v0.0.1/model.zip). Feel free to use it under the package's [license](LICENSE).
 
-### Import as a python package
-
-```python
-from DLMUSE.compute_icv import compute_volume
-
-# Assuming your nifti file is named 'input.nii.gz'
-volume_image = compute_volume("input.nii.gz", "output.nii.gz", "path/to/model/")
-```
-
-### From the terminal
-
-```bash
-DLMUSE --input input.nii.gz --output output.nii.gz --model path/to/model
-```
-
-Replace the `input.nii.gz` with the path to your input nifti file, as well as the model path.
-
-Example:
-
-Assuming a file structure like so:
-
+### From command line
 ```bash
-.
-├── in
-│   ├── input1.nii.gz
-│   ├── input2.nii.gz
-│   └── input3.nii.gz
-├── model
-│   ├── fold_0
-│   ├── fold_2
-│   │   ├── debug.json
-│   │   ├── model_final_checkpoint.model
-│   │   ├── model_final_checkpoint.model.pkl
-│   │   ├── model_latest.model
-│   │   ├── model_latest.model.pkl
-│   └── plans.pkl
-└── out
-```
-
-An example command might be:
-
-```bash
-DLMUSE --input path/to/input/ --output path/to/output/ --model path/to/model/
+DLMUSE -i "image_folder" -o "path to output folder" -m "path to model weights" -f 0 -tr nnUNetTrainer -c 3d_fullres -p nnUNetPlans -d "id" -device cuda/cpu/mps
 ```
 
 ### Using the docker container
@@ -108,15 +67,6 @@ Contributions are welcome! Please refer to our [CONTRIBUTING.md](CONTRIBUTING.md
 If you're a developer looking to contribute, you'll first need to set up a development environment. After cloning the repository, you can install the development dependencies with:
 
 ```bash
-pip install -r requirements-test.txt
+pip install -r requirements.txt
 ```
-
 This will install the packages required for running tests and formatting code. Please make sure to write tests for new code and run them before submitting a pull request.
-
-### Running Tests
-
-You can run the test suite with the following command:
-
-```bash
-pytest
-```
diff --git a/setup.py b/setup.py
index b6554d4..f6c62ab 100644
--- a/setup.py
+++ b/setup.py
@@ -1,8 +1,7 @@
 """Setup tool for DLMUSE."""
 
-import io
-import os
 from pathlib import Path
+
 from setuptools import find_packages, setup
 
 this_directory = Path(__file__).parent
@@ -21,31 +20,26 @@
     download_url="https://github.com/CBICA/DLICV/",
     url="https://github.com/CBICA/DLICV/",
     packages=find_packages(exclude=["tests", ".github"]),
-    install_requires=[
-        "torch",
-        "pathlib",
-        "argparse",
-        "nnunetv2"
-    ],
+    install_requires=["torch", "pathlib", "argparse", "nnunetv2"],
     entry_points={"console_scripts": ["DLMUSE = src.__main__:main"]},
     classifiers=[
-        "Intended Audience :: Developers",
-        "Intended Audience :: Science/Research",
-        "Intended Audience :: Healthcare Industry",
-        "Programming Language :: Python :: 3",
-        "Topic :: Scientific/Engineering :: Artificial Intelligence",
-        "Topic :: Scientific/Engineering :: Image Processing",
-        "Topic :: Scientific/Engineering :: Medical Science Apps.",
-        ],
+        "Intended Audience :: Developers",
+        "Intended Audience :: Science/Research",
+        "Intended Audience :: Healthcare Industry",
+        "Programming Language :: Python :: 3",
+        "Topic :: Scientific/Engineering :: Artificial Intelligence",
+        "Topic :: Scientific/Engineering :: Image Processing",
+        "Topic :: Scientific/Engineering :: Medical Science Apps.",
+    ],
     license="By installing/using DeepMRSeg, the user agrees to the following license: See https://www.med.upenn.edu/cbica/software-agreement-non-commercial.html",
-    keywords = [
-        'deep learning',
-        'image segmentation',
-        'semantic segmentation',
-        'medical image analysis',
-        'medical image segmentation',
-        'nnU-Net',
-        'nnunet'
-    ],
+    keywords=[
+        "deep learning",
+        "image segmentation",
+        "semantic segmentation",
+        "medical image analysis",
+        "medical image segmentation",
+        "nnU-Net",
+        "nnunet",
+    ],
     package_data={"DLMUSE": ["VERSION"]},
 )
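The new command line added in the README hunk exposes nnUNet-style options (`-i`, `-o`, `-m`, `-f`, `-tr`, `-c`, `-p`, `-d`, `-device`). As a rough sketch of how such a front end can be wired up with argparse (which setup.py lists as a dependency): the flag names and nnUNet defaults below come from the README hunk, while `build_parser`, the help strings, and the `required` choices are illustrative assumptions, not DLMUSE's actual implementation.

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    # Hypothetical parser mirroring the flags shown in the README's
    # `DLMUSE -i ... -o ... -m ...` example; not the real DLMUSE CLI.
    parser = argparse.ArgumentParser(prog="DLMUSE")
    parser.add_argument("-i", required=True, help="folder of input nifti images")
    parser.add_argument("-o", required=True, help="output folder")
    parser.add_argument("-m", required=True, help="path to model weights")
    parser.add_argument("-f", default="0", help="fold of the trained model")
    parser.add_argument("-tr", default="nnUNetTrainer", help="nnUNet trainer class")
    parser.add_argument("-c", default="3d_fullres", help="nnUNet configuration")
    parser.add_argument("-p", default="nnUNetPlans", help="nnUNet plans identifier")
    parser.add_argument("-d", help="dataset id")
    parser.add_argument("-device", default="cuda", choices=["cuda", "cpu", "mps"])
    return parser


# Parse an illustrative invocation; unspecified flags fall back to the
# nnUNet-style defaults above.
args = build_parser().parse_args(
    ["-i", "in_folder", "-o", "out_folder", "-m", "weights", "-device", "cpu"]
)
```

Note that `-d "id"` in the README is a placeholder for the nnUNet dataset id; it is left without a default here for the same reason.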