The goal is to denoise heavily noised stent images with a U-Net model. The implementation is available in unet.py.
We generate Gaussian-noised images with a standard deviation between 0.3 and 0.5.
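A minimal sketch of the noising step, assuming NumPy and images scaled to [0, 1] (the actual generation code in the repository may differ):

```python
import numpy as np

def add_gaussian_noise(image, sigma_min=0.3, sigma_max=0.5, rng=None):
    """Add Gaussian noise with a standard deviation drawn uniformly
    from [sigma_min, sigma_max]. `image` is a float array in [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = rng.uniform(sigma_min, sigma_max)
    noised = image + rng.normal(0.0, sigma, size=image.shape)
    # Clip back to the valid intensity range.
    return np.clip(noised, 0.0, 1.0)

clean = np.zeros((64, 64))
noised = add_gaussian_noise(clean, rng=np.random.default_rng(0))
```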
Example of a stent
Noised stent
We train using a configuration file. You can write your own configuration file and run:
make train cfg=[filename]
To run several trainings with different parameters, we use a single configuration file and pass the parameters as command-line arguments. See the implementation in train.py for more context.
Use the following command for training:
make train cfg=configs/unet--noise-images-3k--bs1-lr0.01.yaml data=[data_name] bs=[batch_size] lr=[learning_rate]
- data: string argument naming the dataset. Look at the folders in dataset after you create your data and use the string after the dash. For example, to train on the data-1k dataset, use data=1k.
- bs: argument for the batch size.
- lr: argument for the learning rate.
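The exact override mechanism lives in train.py and is not shown here; a hypothetical Python sketch of how such key=value arguments could be layered on top of a base configuration (the function name and casting rules are assumptions):

```python
def apply_overrides(cfg, overrides):
    """Overlay command-line key=value pairs (e.g. from `make train data=1k bs=4 lr=0.1`)
    onto a base configuration dict. Values are cast to int or float when possible."""
    cfg = dict(cfg)  # keep the base config untouched
    for key, raw in overrides.items():
        for cast in (int, float):
            try:
                raw = cast(raw)
                break
            except (TypeError, ValueError):
                continue
        cfg[key] = raw
    return cfg

base = {"data": "3k", "bs": 1, "lr": 0.01}
cfg = apply_overrides(base, {"data": "1k", "bs": "4", "lr": "0.1"})
# cfg == {"data": "1k", "bs": 4, "lr": 0.1}
```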
We used training.sh to launch several trainings.
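training.sh itself is a shell script; an equivalent Python sketch of sweeping several runs (the particular grid values shown are illustrative, not the exact ones used) could look like:

```python
import subprocess

CFG = "configs/unet--noise-images-3k--bs1-lr0.01.yaml"

def build_command(data, bs, lr):
    """Build the `make train` invocation for one run."""
    return ["make", "train", f"cfg={CFG}", f"data={data}", f"bs={bs}", f"lr={lr}"]

# Illustrative hyperparameter grid to sweep.
runs = [("1k", 1, 0.01), ("2k", 4, 0.1), ("3k", 4, 0.01)]

for data, bs, lr in runs:
    cmd = build_command(data, bs, lr)
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually launch each run
```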
The batch size and the learning rate were the main hyperparameters we focused on. A learning rate of 0.01 usually gives the best results, but on datasets with many more images a learning rate of 0.1 performed best. We used batch sizes of 1 or 4; across the runs, a batch size of 1 tended to give better results, likely because it injects more randomness and variability into the training process. We trained all models for 10 epochs. Some runs start diverging after epoch 5 or 6, especially those combining the larger datasets of noised images with a learning rate of 0.1.
To compare models, view the logs, which include denoised validation images produced during training. Use the command:
tensorboard --logdir logs_ --port 6006
Some denoised images on the validation set:
- data=90, batch_size=1, learning_rate=0.01, epoch=10 -> other denoised images
- data=180, batch_size=1, learning_rate=0.01, epoch=10 -> other denoised images
- data=270, batch_size=1, learning_rate=0.01, epoch=10 -> other denoised images
- data=360, batch_size=4, learning_rate=0.01, epoch=10 -> other denoised images
- data=540, batch_size=1, learning_rate=0.01, epoch=10 -> other denoised images
- data=630, batch_size=1, learning_rate=0.01, epoch= -> other denoised images
- data=720, batch_size=4, learning_rate=0.01, epoch= -> other denoised images
- data=810, batch_size=1, learning_rate=0.01, epoch= -> other denoised images
- data=1k, batch_size=4, learning_rate=0.01, epoch= -> other denoised images
- data=2k, batch_size=4, learning_rate=0.1, epoch= -> other denoised images
- data=3k, batch_size=1, learning_rate=0.01, epoch=7 -> other denoised images
- data=3k, batch_size=4, learning_rate=0.1, epoch=7 -> other denoised images
- data=3k, batch_size=4, learning_rate=0.01, epoch=10 -> other denoised images