Runs semantic segmentation of microscopy images using a U-Net-based deep learning architecture with on-the-fly data augmentation.
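Augmentation is applied to the image/label patches as they are drawn during training. Purely as an illustration of this kind of paired augmentation (not cellunet's actual implementation), one such step could look like:

import numpy as np

def augment_pair(image, label):
    # Illustration only: apply the same random 90-degree rotation and flips
    # to an image patch and its label patch so the two stay aligned.
    k = np.random.randint(4)
    image, label = np.rot90(image, k), np.rot90(label, k)
    if np.random.rand() < 0.5:
        image, label = np.fliplr(image), np.fliplr(label)
    if np.random.rand() < 0.5:
        image, label = np.flipud(image), np.flipud(label)
    return image, label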
Example command-line usage:
cd cellunet
python train.py -i data/nuc0.png -l data/labels0.tif -o output -n 100 -e 5 -p 256
python predict.py -i data/nuc1.png -w output/cnn_model_weights.hdf5 -o output
In brief, the model is trained on images segmented into regions, with each region given one of three labels. In the provided training data, where the goal is to segment cell nuclei, label 0 corresponds to the background, label 1 to the nucleus boundary, and label 2 to the nucleus interior.
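If your ground truth starts as an instance-labeled mask (one integer id per nucleus), a label image in this 3-class format can be derived, for example, with scikit-image. The snippet below is only a sketch with a hypothetical input file, not a cellunet utility:

import numpy as np
from skimage.io import imread, imsave
from skimage.segmentation import find_boundaries

instances = imread('data/instance_mask.tif')  # hypothetical instance-labeled mask (one integer id per nucleus)
labels = np.zeros(instances.shape, dtype=np.uint8)    # 0 = background
labels[instances > 0] = 2                             # 2 = nucleus interior
labels[find_boundaries(instances, mode='inner')] = 1  # 1 = nucleus boundary
imsave('data/labels_from_instances.tif', labels)      # hypothetical output name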
A few functions in utils are adapted from https://github.com/carpenterlab/unet4nuclei.
Tested with Keras 2.0.0 and TensorFlow 1.8.0. For GPU support, use CUDA 9.0 with tensorflow 1.8.0, tensorflow-gpu 1.8.0, and Keras 2.0.0. Avoid Keras 2.2.
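For example, the pinned CPU and GPU packages can be installed with pip (CUDA 9.0 itself has to be installed separately for the GPU build):

pip install tensorflow==1.8.0 keras==2.0.0
pip install tensorflow-gpu==1.8.0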
train.py arguments:
-i : image file path
-l : label file path
-o : output directory
-n : number of steps (default 100)
-b : number of batches (default 16)
-e : number of epochs (default 50)
-p : pixel size of image patches; has to be divisible by 8 (default 256)
-w : hdf5 weight file path
-q : weights for the loss function (default 1.0, 1.0, 1.0); see the loss-weighting sketch below
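The -q weights scale each class's contribution to the loss, so particular classes can be emphasized (e.g. -q 1 1 10 up-weights label 2, the nucleus interior). As an illustration only, and not necessarily the exact loss cellunet uses, a class-weighted categorical cross-entropy in Keras could be written as:

import keras.backend as K

def weighted_categorical_crossentropy(class_weights):
    # class_weights: one weight per label, e.g. [1, 1, 10] to emphasize the interior class
    w = K.variable(class_weights)
    def loss(y_true, y_pred):
        y_pred = K.clip(y_pred, K.epsilon(), 1.0 - K.epsilon())
        # per-pixel cross-entropy with each class term scaled by its weight
        return -K.sum(w * y_true * K.log(y_pred), axis=-1)
    return loss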
predict.py arguments:
-i : image file path
-w : hdf5 weight file path
-o : output directory
More train.py examples:
python train.py -i data/nuc0.png -l data/labels0.tif
python train.py -i data/nuc0.png -l data/labels0.tif -q 1 1 10 -o output
python train.py -i data/nuc0.png -l data/labels0.tif -w data/weights.tests.hdf5
python train.py -i data/nuc0.png / data/nuc0.png -l data/labels0.tif / data/labels1.tif
In the last example, multiple image/label pairs are passed by separating the paths with " / ".
Input images may have any number of color channels; in the example data, nuc0.png and nuc1.png have a single channel while composite_nuc.tif has 2 channels.
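A quick way to check the shape and channel count of an input image before training, sketched here with scikit-image (not part of cellunet):

from skimage.io import imread

img = imread('data/composite_nuc.tif')
lbl = imread('data/labels0.tif')
print(img.shape, lbl.shape)  # spatial dimensions should match; any extra axis is the channel axis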
python train.py -i data/composite_nuc.tif -l data/labels0.tif -o output -n 100 -e 5 -p 256
python predict.py -i data/composite_nuc.tif -w output/cnn_model_weights.hdf5 -o output
python predict.py -i data/nuc0.png -w data/cnn_model_weights.hdf5