Hello author, I have checked the dataset you provided. Most of the images are blurry, but a few are clear. If all the images in the dataset were blurry, would your method still work?
Only the blurry images are used for training; the few clear images are reserved for evaluation (as ground-truth clean images for measuring model performance under certain metrics).
Hope this helps and please let me know if you have any other questions.
Hello, could you explain why the PSNR I obtained by running this paper's dataset through the original 3DGS is much higher than the value reported in your paper?
Hello, I think this is because the original 3DGS is unstable when trained with degraded images (blur, aberration, etc.).
Hence, the resulting PSNR can fluctuate, since it depends on per-pixel loss, but I don't think there will be a noticeable gap in LPIPS across different runs.
At the time of writing, we reported the 3D-GS results from only one run, but even if it reaches a higher PSNR over several runs, it still cannot deblur the scene and will be much worse than ours or other deblurring frameworks.
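For reference, PSNR depends only on the per-pixel mean squared error, which is why small per-run pixel differences shift it noticeably. A minimal sketch of the standard PSNR computation (not the paper's actual evaluation code, and assuming images normalized to [0, 1]):

```python
import numpy as np

def psnr(img: np.ndarray, ref: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio between a rendered image and its reference.

    Computed from per-pixel MSE, so tiny per-run pixel variations
    translate directly into PSNR fluctuations.
    """
    mse = np.mean((img.astype(np.float64) - ref.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Example: a uniform 0.1 error gives MSE = 0.01, i.e. PSNR = 20 dB.
rendered = np.zeros((4, 4))
reference = np.full((4, 4), 0.1)
print(psnr(rendered, reference))
```

Because the log is taken of an average over all pixels, a handful of sharp-vs-blurry pixel mismatches can swing PSNR between runs, whereas a perceptual metric like LPIPS compares deep-feature statistics and is less sensitive to such per-pixel noise.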