Tested with LowDoesCTRecon, with the same parameters, for 3 epochs, using the original MSDNet (MS_D).
Recall that n2i2 averages over all target sets whilst n2i1 randomly picks one.
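To make the n2i1/n2i2 distinction concrete, here is a minimal sketch of the two target-selection strategies. The helper name `n2i_target` and the `target_splits` argument are hypothetical, assuming the usual Noise2Inverse setup where the sinogram splits not fed to the network are reconstructed and used as targets:

```python
import numpy as np

def n2i_target(target_splits, strategy="n2i2", rng=None):
    """Build a Noise2Inverse training target from the held-out splits.

    target_splits: list of reconstructions from the splits NOT used as input.
    - "n2i2": average over all target splits.
    - "n2i1": randomly pick a single split.
    """
    rng = rng or np.random.default_rng()
    if strategy == "n2i2":
        return np.mean(target_splits, axis=0)  # average of all targets
    elif strategy == "n2i1":
        return target_splits[rng.integers(len(target_splits))]  # one at random
    raise ValueError(f"unknown strategy: {strategy}")
```

Averaging (n2i2) touches every split each step, which is consistent with it being slower per epoch but converging to a lower loss.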
Results:
mean testing loss: n2i1: 0.9878, n2i2: 0.5104
testing loss std: n2i1: 0.0684, n2i2: 0.0604
time / epoch: n2i1: 24.55s, n2i2: 1m23s
Let me know what you think.
This gives us a good idea that the code is doing something relatively decent, but we need to run a proper test, with proper evaluation: 100 epochs and at least 150 samples or so (or, even better, the full dataset).
Why? Looking at the loss is always good, but sometimes the bug is in the loss evaluation itself, so the loss going down doesn't necessarily tell us that everything is correct; it only strongly suggests it. For testing, you only report a std? We need mean+std of e.g. PSNR and SSIM, plus a couple of visualizations of [gt, result, noisy] triplets, to check visually whether the result is successful.
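For the metrics part, something along these lines would do. This is a sketch, not tied to our codebase: it assumes images come as NumPy arrays and uses scikit-image's metric functions; the `evaluate` name and argument layout are made up here:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(gts, results):
    """Mean and std of PSNR and SSIM over matched [gt, result] pairs."""
    psnrs, ssims = [], []
    for gt, res in zip(gts, results):
        dr = gt.max() - gt.min()  # data range of the ground truth
        psnrs.append(peak_signal_noise_ratio(gt, res, data_range=dr))
        ssims.append(structural_similarity(gt, res, data_range=dr))
    return (np.mean(psnrs), np.std(psnrs)), (np.mean(ssims), np.std(ssims))
```

For the visual check, plotting a few `[gt, result, noisy]` triplets side by side with `matplotlib.pyplot.subplots(1, 3)` and saving the figure is enough.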
Does not need to be with MSDNet, a different network will also do, but MSDNet is fine.
So: we need to run a proper experiment of N2I to verify the code. You can basically lock one of the GPUs to do this, as long as the training takes no more than a day or two; just run it with e.g. nohup. LowDoesCTRecon is a great experiment to run it with :)
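Launching it could look like the following. The script name `train_n2i.py` and its flags are placeholders for whatever the actual entry point is; the pattern of pinning one GPU and detaching with nohup is the point:

```shell
# Pin the job to GPU 0 so the other GPUs stay free (adjust the index as needed).
# nohup + & detaches it from the terminal; output goes to n2i_run.log.
CUDA_VISIBLE_DEVICES=0 nohup python train_n2i.py --epochs 100 > n2i_run.log 2>&1 &

# Follow progress later with:
tail -f n2i_run.log
```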
We need to test the new one (2) to verify it's correct, and then remove the old one.