You can download the dataset here: PanNuke.
The dataset contains 3 folders: folders 1, 2 and 3.
In the config file `config`, specify the folder location:
```
path_panuke = FOLDER_LOCATION
```
Run:
```
python np_to_images_folder.py
```
This script:
1. Opens the `images.npy` and `mask.npy` of each folder
2. Corrects wrong annotations
3. Saves each patch and each ground-truth annotation as a `.tif` file
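The export step above can be sketched as follows. This is a minimal illustration, not the repo's exact script: the 6-channel mask layout (instance ids in channels 0-4, background in channel 5) and the use of `tifffile` are assumptions.

```python
import numpy as np
from pathlib import Path

def pannuke_mask_to_label(mask_6ch):
    """Collapse an assumed 6-channel PanNuke mask (H, W, 6) into one
    instance-label map; channels 0-4 hold per-type instance ids,
    channel 5 is background and is skipped."""
    label = np.zeros(mask_6ch.shape[:2], dtype=np.int32)
    for c in range(5):  # skip the background channel
        ch = mask_6ch[..., c].astype(np.int32)
        label[ch > 0] = ch[ch > 0]
    return label

def export_fold(images_npy, masks_npy, out_dir):
    """Save each patch and its ground-truth annotation as .tif files."""
    import tifffile  # assumed dependency for .tif I/O
    out = Path(out_dir)
    (out / "images").mkdir(parents=True, exist_ok=True)
    (out / "masks").mkdir(parents=True, exist_ok=True)
    images = np.load(images_npy, mmap_mode="r")  # avoid loading the whole fold
    masks = np.load(masks_npy, mmap_mode="r")
    for i, (img, msk) in enumerate(zip(images, masks)):
        tifffile.imwrite(out / "images" / f"{i:05d}.tif", img.astype(np.uint8))
        tifffile.imwrite(out / "masks" / f"{i:05d}.tif", pannuke_mask_to_label(msk))
```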
Run:
```
python train_test_val.py --ptrain 0.75 --pval 0.125
```
This code generates 3 random dataframes; each contains the filenames of the images belonging to the corresponding split. You can choose `p_train` and `p_val`, the proportions of the train and validation splits (`p_test = 1 - p_train - p_val`).
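The split logic can be sketched as below; function and column names are illustrative, not the repo's.

```python
import numpy as np
import pandas as pd

def split_filenames(filenames, p_train=0.75, p_val=0.125, seed=0):
    """Randomly split filenames into train/val/test dataframes.
    p_test is implicitly 1 - p_train - p_val."""
    rng = np.random.default_rng(seed)
    names = np.array(filenames)
    rng.shuffle(names)
    n = len(names)
    n_train = int(round(p_train * n))
    n_val = int(round(p_val * n))
    df_train = pd.DataFrame({"filename": names[:n_train]})
    df_val = pd.DataFrame({"filename": names[n_train:n_train + n_val]})
    df_test = pd.DataFrame({"filename": names[n_train + n_val:]})
    return df_train, df_val, df_test
```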
Run:
```
cd Autoencoder
python train.py
```
Run:
```
python generate_contour_inside.py --path_annotations
```
`path_annotations` is the path to the annotations. These can be the ground-truth annotations or the predictions of a baseline nuclei segmentation of your images. If you have a baseline nuclei segmentation of your images, store it in `path_baseline\baseline`.
To convert the ground-truth annotations to contours and masks, set `path_annotations=path_gt`; to convert the baseline predictions, set `path_annotations=path_baseline`.
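One common way to derive contour and inside maps from an instance-label map is shown below; this is a pure-numpy sketch of the idea, not the repo's implementation.

```python
import numpy as np

def contour_and_inside(label_map):
    """Split an instance-label map (0 = background) into a binary
    contour map and an inside map. A foreground pixel is on the
    contour if any 4-neighbour carries a different label."""
    lab = np.asarray(label_map)
    padded = np.pad(lab, 1, mode="edge")
    diff = np.zeros_like(lab, dtype=bool)
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        # neighbour of pixel (i, j) at (i+dy, j+dx), edges replicated
        shifted = padded[1 + dy: 1 + dy + lab.shape[0],
                         1 + dx: 1 + dx + lab.shape[1]]
        diff |= shifted != lab
    contour = (lab > 0) & diff
    inside = (lab > 0) & ~diff
    return contour.astype(np.uint8), inside.astype(np.uint8)
```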
Run:
```
python fast_augment_save.py
```
if the baseline nuclei segmentation doesn't have many splits/merges.
This code creates an augmented segmentation by adding split and merge errors on random nuclei in each image, and saves the new annotations, contours and insides in a new folder.
It also creates a click map for each image by comparing the ground truth to the baseline segmentation.
The click map has 4 channels:
- 1st channel: false positive (FP) nuclei
- 2nd channel: merged nuclei
- 3rd channel: split nuclei
- 4th channel: false negative (FN) nuclei
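The ground-truth-vs-baseline comparison behind the click map can be sketched as follows. This illustrates the matching logic only; the repo's exact click encoding (e.g. click size or placement) may differ.

```python
import numpy as np

def build_click_map(gt, pred):
    """Build a 4-channel click map (FP, merged, split, FN) by comparing
    a ground-truth instance map `gt` to a baseline instance map `pred`
    (same shape, 0 = background). One click pixel is placed at the
    centroid of each flagged nucleus."""

    def centroid(mask):
        ys, xs = np.nonzero(mask)
        return int(ys.mean()), int(xs.mean())

    clicks = np.zeros((4,) + gt.shape, dtype=np.uint8)
    for pid in [i for i in np.unique(pred) if i != 0]:
        region = pred == pid
        gt_ids = set(np.unique(gt[region])) - {0}
        if not gt_ids:              # predicted nucleus with no GT match: FP
            clicks[0][centroid(region)] = 1
        elif len(gt_ids) >= 2:      # one prediction covers several GT nuclei: merge
            clicks[1][centroid(region)] = 1
    for gid in [i for i in np.unique(gt) if i != 0]:
        region = gt == gid
        pred_ids = set(np.unique(pred[region])) - {0}
        if not pred_ids:            # GT nucleus with no prediction: FN
            clicks[3][centroid(region)] = 1
        elif len(pred_ids) >= 2:    # one GT nucleus covered by several predictions: split
            clicks[2][centroid(region)] = 1
    return clicks
```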
Run:
```
cd Click_ref
python train.py
```
This code uses the baseline segmentation and the generated click maps to train the click_ref module to reconstruct the ground truth.
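Assembling one training example for such a module might look like the sketch below; the 5-channel input layout (baseline plus the 4 click channels) is an assumption about how the inputs are combined, not the repo's exact format.

```python
import numpy as np

def make_training_pair(baseline, click_map, gt):
    """Stack the baseline segmentation (H, W) with the 4-channel click
    map (4, H, W) into a 5-channel network input; the target is the
    ground-truth segmentation. Channel layout is assumed."""
    x = np.concatenate([baseline[None].astype(np.float32),
                        click_map.astype(np.float32)], axis=0)
    y = gt.astype(np.float32)[None]
    return x, y
```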