
Why is it so slow? On one V100 GPU, 10 seconds per frame? #30

Open
xiaoxiongli opened this issue Apr 13, 2021 · 2 comments

Comments

@xiaoxiongli

Hi all, this is really good work. I tested it on a DIV2K image, doing 4x SR from a resolution of about 400x500 to 1600x2000, but it seems really slow: on one V100 GPU it takes about 10 seconds per frame.

I would like to know why it is so slow. Which part of the method is the bottleneck? The INN?

@pkuxmq
Owner

pkuxmq commented Apr 13, 2021

Hi, we have tested the models on one Tesla P100 GPU. When the HR size is 1920x1080, the downscaling and upscaling times for the 2x model IRN_x2 are both around 345 ms, and for the 4x model IRN_x4 both around 520 ms. It should not be much longer for 1600x2000 images. How do you measure the time? For example, do you include the calculation of PSNR and SSIM on the CPU?
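The measurement pitfall described above is common: CUDA kernel launches are asynchronous, and metrics such as PSNR/SSIM run on the CPU, so a naive end-to-end timer can report far more than the actual GPU inference time. A minimal, hedged sketch of a fairer timing harness (the workload below is a hypothetical stand-in for a model forward pass such as IRN_x4, not code from this repository; for a real CUDA model you would also call `torch.cuda.synchronize()` before each clock read):

```python
import time

def time_inference(fn, warmup=3, runs=10):
    """Return the average wall-clock seconds of fn() over `runs` calls.

    Warm-up calls are excluded because the first iterations pay one-off
    costs (CUDA context creation, cuDNN autotuning, memory allocation).
    For a GPU model, fn should end with torch.cuda.synchronize() so that
    queued asynchronous kernels are actually counted.
    """
    for _ in range(warmup):
        fn()  # warm-up: not timed
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs

# Hypothetical CPU workload standing in for model inference.
avg_seconds = time_inference(lambda: sum(i * i for i in range(10_000)))
print(f"average: {avg_seconds * 1000:.3f} ms per run")
```

Timing only the forward pass this way, and computing PSNR/SSIM outside the timed region, separates true inference cost from evaluation overhead.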

@xiaoxiongli
Author

xiaoxiongli commented May 28, 2021

Sorry, I had only been looking at the printed log. I wrote some code to measure the inference time: it is about 1 s for a 2K image, using IRN_x4 on a P100.
