update: notebook
soumik12345 committed Jan 16, 2024
1 parent 61f295d commit 32ec95a
Showing 1 changed file with 87 additions and 8 deletions.
95 changes: 87 additions & 8 deletions colabs/monai/3d_brain_tumor_segmentation.ipynb
@@ -6,12 +6,12 @@
"source": [
"# Brain tumor 3D segmentation with MONAI and Weights & Biases\n",
"\n",
"This tutorial shows how to construct a training workflow of multi-labels segmentation task using [MONAI](https://github.com/Project-MONAI/MONAI) and use experiment tracking and data visualization features of [Weights & Biases](https://wandb.ai/site). The tutorial contains the following features:\n",
"This tutorial shows how to construct a training workflow of multi-labels 3D brain tumor segmentation task using [MONAI](https://github.com/Project-MONAI/MONAI) and use experiment tracking and data visualization features of [Weights & Biases](https://wandb.ai/site). The tutorial contains the following features:\n",
"\n",
"1. Initialize a Weights & Biases run and synchrozize all configs associated with the run for reproducibility.\n",
"2. MONAI transform API:\n",
" 1. MONAI Transforms for dictionary format data.\n",
" 2. How to define a new transform according to MONAI transform API.\n",
" 2. How to define a new transform according to MONAI `transforms` API.\n",
" 3. How to randomly adjust intensity for data augmentation.\n",
"3. Data Loading and Visualization:\n",
" 1. Load Nifti image with metadata, load a list of images and stack them.\n",
@@ -29,7 +29,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup and Installation\n",
"## 🌴 Setup and Installation\n",
"\n",
"First, let us install the latest version of both MONAI and Weights and Biases."
]
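},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The actual install cell is collapsed in this diff; below is a minimal sketch of what it likely contains (the exact pip arguments are an assumption):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Assumed install command; the `nibabel` extra provides the NIfTI reader used later\n",
"!python -m pip install -q -U \"monai[nibabel]\" wandb"
]
},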
@@ -102,14 +102,37 @@
"wandb.login()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 🌳 Initialize a W&B Run\n",
"\n",
"We will start a new W&B run to start tracking our experiment."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"wandb.init(project=\"monai-brain-tumor-segmentation\", job_type=\"test\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Use of proper config system is a recommended best practice for reproducible machine learning. We can track the hyperparameters for every experiment using W&B."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"wandb.init(project=\"monai-brain-tumor-segmentation\", job_type=\"test\")\n",
"\n",
"config = wandb.config\n",
"config.seed = 0\n",
"config.roi_size = [224, 224, 144]\n",
@@ -132,16 +155,38 @@
"config.max_prediction_images_visualized = 20"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We would also need to set the random seed for modules to enable or disable deterministic training."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"os.makedirs(config.dataset_dir, exist_ok=True)\n",
"os.makedirs(config.checkpoint_dir, exist_ok=True)\n",
"set_determinism(seed=config.seed)\n",
"\n",
"set_determinism(seed=config.seed)"
"# Create directories\n",
"os.makedirs(config.dataset_dir, exist_ok=True)\n",
"os.makedirs(config.checkpoint_dir, exist_ok=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 💿 Data Loading and Transformation"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here we use the `monai.transforms` API to create a custom transform that converts the multi-classes labels into multi-labels segmentation task in one-hot format."
]
},
{
@@ -159,6 +204,8 @@
" The possible classes are TC (Tumor core), WT (Whole tumor)\n",
" and ET (Enhancing tumor).\n",
"\n",
" Reference: https://github.com/Project-MONAI/tutorials/blob/main/3d_segmentation/brats_segmentation_3d.ipynb\n",
"\n",
" \"\"\"\n",
"\n",
" def __call__(self, data):\n",
@@ -179,6 +226,13 @@
" return d"
]
},
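{
"cell_type": "markdown",
"metadata": {},
"source": [
"The body of `__call__` is collapsed in this diff. Based on the MONAI BraTS tutorial referenced in the docstring, it presumably builds the three channels roughly as follows (a sketch, not the exact cell contents):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"\n",
"\n",
"def brats_label_to_channels(label: torch.Tensor) -> torch.Tensor:\n",
"    # Sketch following the referenced tutorial: label 1 is peritumoral edema,\n",
"    # label 2 is GD-enhancing tumor, label 3 is the necrotic/non-enhancing tumor core\n",
"    tc = torch.logical_or(label == 2, label == 3)  # Tumor core: labels 2 and 3\n",
"    wt = torch.logical_or(tc, label == 1)  # Whole tumor: labels 1, 2 and 3\n",
"    et = label == 2  # Enhancing tumor: label 2\n",
"    return torch.stack([tc, wt, et], dim=0).float()"
]
},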
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we set up transforms for training and validation datasets respectively."
]
},
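{
"cell_type": "markdown",
"metadata": {},
"source": [
"The transform cell below is largely collapsed in this diff. Here is a rough sketch of a typical composition, following the referenced MONAI BraTS tutorial (the transform choices and arguments are assumptions):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from monai.transforms import (\n",
"    Compose,\n",
"    EnsureChannelFirstd,\n",
"    LoadImaged,\n",
"    NormalizeIntensityd,\n",
"    Orientationd,\n",
"    RandFlipd,\n",
"    RandSpatialCropd,\n",
"    Spacingd,\n",
")\n",
"\n",
"# Assumed sketch; the custom one-hot transform defined above would slot in\n",
"# after loading, and val_transform would drop the random augmentations\n",
"train_transform = Compose(\n",
"    [\n",
"        LoadImaged(keys=[\"image\", \"label\"]),\n",
"        EnsureChannelFirstd(keys=\"image\"),\n",
"        Orientationd(keys=[\"image\", \"label\"], axcodes=\"RAS\"),\n",
"        Spacingd(\n",
"            keys=[\"image\", \"label\"],\n",
"            pixdim=(1.0, 1.0, 1.0),\n",
"            mode=(\"bilinear\", \"nearest\"),\n",
"        ),\n",
"        RandSpatialCropd(keys=[\"image\", \"label\"], roi_size=config.roi_size, random_size=False),\n",
"        RandFlipd(keys=[\"image\", \"label\"], prob=0.5, spatial_axis=0),\n",
"        NormalizeIntensityd(keys=\"image\", nonzero=True, channel_wise=True),\n",
"    ]\n",
")"
]
},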
{
"cell_type": "code",
"execution_count": null,
@@ -226,6 +280,17 @@
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 🍁 The Dataset\n",
"\n",
"The dataset that we will use for this experiment comes from http://medicaldecathlon.com/. We will use Multimodal multisite MRI data (FLAIR, T1w, T1gd, T2w) to segment Gliomas, necrotic/active tumour, and oedema. The dataset consists of 750 4D volumes (484 Training + 266 Testing).\n",
"\n",
"We will use the `DecathlonDataset` to automatically download and extract the dataset. It inherits MONAI `CacheDataset` which enables us to set `cache_num=N` to cache `N` items for training and use the default args to cache all the items for validation, depending on your memory size."
]
},
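{
"cell_type": "markdown",
"metadata": {},
"source": [
"The instantiation cell below is mostly collapsed in this diff; here is a minimal sketch of typical `DecathlonDataset` usage (the argument values are assumptions):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from monai.apps import DecathlonDataset\n",
"\n",
"# Assumed arguments; Task01_BrainTumour is the Medical Decathlon brain tumor task\n",
"train_dataset = DecathlonDataset(\n",
"    root_dir=config.dataset_dir,\n",
"    task=\"Task01_BrainTumour\",\n",
"    section=\"training\",\n",
"    transform=val_transform,  # val_transform on both splits; see the note below\n",
"    download=True,\n",
"    cache_num=100,  # cache 100 training items; tune to available memory\n",
"    num_workers=4,\n",
")"
]
},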
{
"cell_type": "code",
"execution_count": null,
@@ -252,6 +317,20 @@
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Note:** Instead of applying the `train_transform` to the `train_dataset`, we have applied `val_transform` to both the training and validation datasets. This is because, before training, we would be visualizing samples from both the splits of the dataset."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 📸 Visualizing the Dataset"
]
},
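{
"cell_type": "markdown",
"metadata": {},
"source": [
"The visualization cell below is collapsed in this diff. One plausible sketch logs a labeled axial slice to W&B using `wandb.Image` with segmentation mask overlays (the helper function and class labels here are hypothetical):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"\n",
"def log_slice(image_3d, label_3d, slice_idx, key):\n",
"    # Hypothetical helper: overlay a single-channel mask on one axial slice\n",
"    wandb.log(\n",
"        {\n",
"            key: wandb.Image(\n",
"                image_3d[:, :, slice_idx],\n",
"                masks={\n",
"                    \"ground-truth\": {\n",
"                        \"mask_data\": label_3d[:, :, slice_idx].astype(np.uint8),\n",
"                        \"class_labels\": {0: \"background\", 1: \"tumor\"},\n",
"                    }\n",
"                },\n",
"            )\n",
"        }\n",
"    )"
]
},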
{
"cell_type": "code",
"execution_count": null,
