Hi, thanks very much for your solid work. I have a question about the training input patch size for single-image super-resolution (SISR). I have noticed that many works use a training patch size of 96x96 for x2 SISR. However, many deeper networks (e.g., RCAN) have a much larger receptive field. I wonder whether a 96x96 training patch size is really the best choice for x2?
Training an SR model with a larger patch size improves performance, but it also drastically increases training time and memory consumption, so most methods use a limited patch size (48x48 or 64x64 for the x4 scale).
However, in my experience, the performance improvement from using a larger patch size is marginal once the patch size exceeds 64x64. This may be because the SR model mostly refers to the immediate neighbors of a pixel when reconstructing it, but this has not been clearly proven.
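To make the patch-size convention concrete, here is a minimal sketch of the random aligned LR/HR patch cropping that most SISR training pipelines use. The function name and arguments are hypothetical, not from any specific repo; the key point is that the HR patch side is the LR patch side times the scale, so a 96x96 HR patch at x2 corresponds to a 48x48 LR input.

```python
import numpy as np

def extract_patch_pair(lr_img, hr_img, lr_patch_size, scale, rng=None):
    """Crop a random aligned LR/HR patch pair for SISR training.

    lr_img: (H, W, C) low-resolution image
    hr_img: (H * scale, W * scale, C) high-resolution image
    lr_patch_size: side of the LR patch; the HR patch side is
                   lr_patch_size * scale (e.g. 48 -> 96 at x2)
    """
    rng = rng if rng is not None else np.random.default_rng()
    h, w = lr_img.shape[:2]
    # Pick a random top-left corner in LR coordinates.
    x = int(rng.integers(0, w - lr_patch_size + 1))
    y = int(rng.integers(0, h - lr_patch_size + 1))
    lr_patch = lr_img[y:y + lr_patch_size, x:x + lr_patch_size]
    # The matching HR crop starts at the scaled coordinates.
    hr_patch_size = lr_patch_size * scale
    hx, hy = x * scale, y * scale
    hr_patch = hr_img[hy:hy + hr_patch_size, hx:hx + hr_patch_size]
    return lr_patch, hr_patch
```

Because the per-patch compute and activation memory grow roughly with the patch area (quadratically in the side length), going from 48x48 to 96x96 LR patches costs about 4x per sample, which is why the gains above 64x64 rarely justify the price.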