Weird artifacts when testing on Kodak dataset #13
Comments
Hi, thanks for pointing it out. I have verified it, and it is interesting that IRN_x4 does not produce such artifacts on Kodak, but IRN_x2 has artifacts on 12 images (half of Kodak). Besides, I tested another IRN_x2 model (I finetuned the pretrained model with an L1 loss for LR guidance and added random noise to the LR images before upscaling during training); the artifacts are largely reduced, but about 5 images still show a few artifacts. I did not observe such artifacts on Set5, Set14, B100, Urban100, or DIV2K at any scale. It may be related to the network architecture, the training method, and the training dataset. We will explore it further.
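For context, here is a minimal sketch of the finetuning recipe described above. The `model.downscale`/`model.upscale` methods, `noise_std`, and the loss weighting are assumptions for illustration (IRN's actual training also involves a latent variable, which is glossed over here); this is not the repository's real training code:

```python
import torch
import torch.nn.functional as F

def training_step(model, hr, lr_gt, lambda_guide=1.0, noise_std=0.01):
    # Forward pass: generate the LR image from the HR input.
    lr_pred = model.downscale(hr)

    # L1 guidance loss pulls the generated LR toward the reference
    # (e.g. bicubic) LR image, restricting the valid LR space.
    loss_guide = F.l1_loss(lr_pred, lr_gt)

    # Add small random noise to the LR image before upscaling, so the
    # inverse pass learns to tolerate slight deviations in the LR input.
    lr_noisy = lr_pred + noise_std * torch.randn_like(lr_pred)

    # Inverse pass: reconstruct HR from the perturbed LR image.
    hr_rec = model.upscale(lr_noisy)
    loss_rec = F.l1_loss(hr_rec, hr)

    return loss_rec + lambda_guide * loss_guide
```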
I also see similar artifacts on many images, even with DIV2K. It sounds like the author did not see them?
Hi @charliewang789, what kind of artifacts do you observe, and what is your test setting? We do not observe artifacts on val_DIV2K. Could you please provide detailed information, e.g. which image in DIV2K and which scale? Thanks!
The artifacts are likely caused by the lack of robustness of LR generation for some unseen distributions. One solution is to finetune the model on several samples of the target distribution; for example, if we finetune the model for several epochs on Kodak, the artifacts that the pretrained model produces on Kodak can be removed. Another method is to increase the loss weight for LR guidance during training, as sketched below. This may lead to a slight performance drop for HR reconstruction (around 0.1-0.2 dB if we increase the weight by 10 times), but the restriction toward valid LR images is stronger. There may also be other methods to regularize the model against overfitting that require exploration.
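As a rough illustration of both mitigations, here is a hedged sketch of a short finetuning loop reusing the hypothetical `training_step` above; `kodak_loader`, the epoch count, the learning rate, and the 10x guidance weight (`lambda_guide=10.0`) are illustrative assumptions:

```python
import torch

# Finetune the pretrained model for several epochs on samples of the
# target distribution (Kodak here), with the LR-guidance weight raised
# 10x relative to the default used in training_step.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)

for epoch in range(5):                  # "several epochs"
    for hr, lr_gt in kodak_loader:      # assumed DataLoader over Kodak HR/LR pairs
        loss = training_step(model, hr, lr_gt, lambda_guide=10.0)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```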
Thanks for sharing the great work!
I tested IRN_x2 on the Kodak dataset (http://r0k.us/graphics/kodak/), which is often adopted in image-related tasks.
I saw some weird green shadow-like artifacts in many of the outputs, both in the LR images and in the reconstructed ones (like the images shown below). Since Kodak is also a natural image dataset, I would not expect it to have special characteristics or distributions compared to Set5, Set14, or DIV2K that would lead to these artifacts. Did I overlook something or make a mistake? Do you observe similar artifacts in your experiments?
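For reference, a minimal sketch of the kind of test loop used here, saving both outputs so artifacts can be inspected in either; the paths and the `model` object with its `downscale`/`upscale` methods are assumptions, not the repository's actual test script:

```python
import os
import torch
import torchvision.transforms.functional as TF
from PIL import Image

@torch.no_grad()
def test_kodak(model, kodak_dir="Kodak", out_dir="results"):
    # Save the generated LR and the reconstructed HR image for each photo.
    os.makedirs(out_dir, exist_ok=True)
    for name in sorted(os.listdir(kodak_dir)):
        hr = TF.to_tensor(Image.open(os.path.join(kodak_dir, name))).unsqueeze(0)
        lr = model.downscale(hr)     # generated LR image
        rec = model.upscale(lr)      # reconstructed HR image
        TF.to_pil_image(lr.squeeze(0).clamp(0, 1)).save(os.path.join(out_dir, "lr_" + name))
        TF.to_pil_image(rec.squeeze(0).clamp(0, 1)).save(os.path.join(out_dir, "rec_" + name))
```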