Why do you use the nearest method for matching the resolution of (LR, HR) due to CutBlur? #19
Comments
First of all, in our internal analysis, we have found that nearest-neighbor-based upsampling is better than bicubic (although the difference is marginal).
So, the LR input, the SR output, and the HR GT all have the same size?
If LR and HR have the same size at the beginning, then after the network is applied SR_size == 4 x LR_size, so how do you calculate the loss between SR and HR? Do you downscale HR, or something else? Or is it just down(CutBlur(up(LR)))?
Umm, so the method is CutBlur(HR, LR), replacing part of HR rather than LR?
... Forget it ... I just saw the first layer of these networks ...
After CutBlur, LR is now the same size as the GT. Do you then downsample it by a factor of 4, i.e. F.interpolate(LR, scale_factor=0.25, mode="nearest"), to bring it back to the small size?
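To make the flow being discussed concrete, here is a minimal sketch (not the repository's code) of a CutBlur-style step with nearest resizing: upsample LR to the HR size, paste an HR region into it, and optionally resize the result back to the LR grid with F.interpolate(..., scale_factor=0.25, mode="nearest") as mentioned above. The function name cutblur_sketch, the x4 scale factor, and the fixed patch ratio alpha are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def cutblur_sketch(lr, hr, alpha=0.7):
    # Hypothetical helper, not the repository code.
    # Assumes a x4 scale and 4D tensors: lr is (N, C, h, w), hr is (N, C, 4h, 4w).

    # 1) Match the resolution of (LR, HR): nearest upsampling of LR to the HR size.
    lr_up = F.interpolate(lr, scale_factor=4, mode="nearest")

    # 2) Cut a random box and paste the HR content into the upsampled LR.
    h, w = hr.size(2), hr.size(3)
    ch, cw = int(h * alpha), int(w * alpha)
    cy = torch.randint(0, h - ch + 1, (1,)).item()
    cx = torch.randint(0, w - cw + 1, (1,)).item()
    lr_aug = lr_up.clone()
    lr_aug[..., cy:cy + ch, cx:cx + cw] = hr[..., cy:cy + ch, cx:cx + cw]

    # 3) If the network's first layer expects the small LR resolution,
    #    bring the augmented input back down with nearest interpolation.
    lr_small = F.interpolate(lr_aug, scale_factor=0.25, mode="nearest")
    return lr_aug, lr_small
```

Here lr_aug has the HR spatial size, which corresponds to the "LR input, SR output and HR GT have the same size" case discussed above, while lr_small corresponds to the down(CutBlur(up(LR))) variant mentioned earlier.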
I have a question about how you match the resolution of (LR, HR) for CutBlur.
When I checked the code that matches the resolution of (LR, HR) for CutBlur,
I found that it uses nearest interpolation:
match the resolution of (LR, HR) due to CutBlur
Why don't you use bicubic?
Most people use bicubic in super-resolution.
Is there a special reason?
I am interested in your CutBlur.
Thank you for your attention.
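For reference, a minimal sketch (again, not the repository's code) of the resolution-matching step the quoted comment likely refers to is shown below, with the interpolation mode exposed as a parameter so that "nearest" and "bicubic" can be compared directly, as in the reply above. The function name match_resolution is an assumption for illustration.

```python
import torch.nn.functional as F

def match_resolution(lr, hr, mode="nearest"):
    # Upsample LR to the HR spatial size so CutBlur can swap pixel regions
    # between the two tensors. Swap mode to "bicubic" to test the alternative.
    if lr.size() != hr.size():
        extra = {} if mode == "nearest" else {"align_corners": False}
        lr = F.interpolate(lr, size=hr.size()[-2:], mode=mode, **extra)
    return lr, hr
```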