
Slow inference speed #21

Open
manojs8473 opened this issue Apr 25, 2023 · 2 comments
manojs8473 commented Apr 25, 2023

On my system with an RTX 3080 8GB and a Ryzen 9 6900HX, I get around 4 FPS for feature detection (input images at 1280x720 resolution). Is there any way to increase the inference speed?

By the way, amazing work! I found DISK to be far more robust than SuperPoint+SuperGlue and SOSNet in terms of matching under large changes in rotation as well as illumination. The only bottleneck is the inference time; it's far too slow to be used in a real-time pipeline (in my case, 25 FPS).

manojs8473 changed the title from "Inference speed" to "Slow inference speed" on Apr 25, 2023
jatentaki (Collaborator) commented:
Have you tried running detection in half precision? DISK is not blazing fast at inference time because it runs convolutions at full image resolution. You could also try quantizing it further, or distilling it into a more runtime-optimized backbone.
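A minimal sketch of what half-precision inference could look like with `torch.autocast`. The model here is a stand-in convolutional network, not the actual DISK architecture or API; `image`, `model`, and the heatmap output are all illustrative:

```python
import torch
import torch.nn as nn

# Stand-in for a full-resolution convolutional detector (NOT the real DISK net).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
).eval()

image = torch.rand(1, 3, 720, 1280)  # 1280x720 input, as in the issue

if torch.cuda.is_available():
    model, image = model.cuda(), image.cuda()
    # On CUDA, autocast runs eligible ops (convolutions included) in float16.
    with torch.no_grad(), torch.autocast(device_type="cuda", dtype=torch.float16):
        heatmap = model(image)
else:
    # CPU fallback: autocast on CPU uses bfloat16 instead of float16.
    with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
        heatmap = model(image)

print(tuple(heatmap.shape))  # (1, 1, 720, 1280)
```

Since convolutions dominate DISK's runtime, halving their precision is often the cheapest speedup to try before reaching for quantization or distillation.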

jatentaki (Collaborator) commented:
Also, as a note: unless you strongly subsample (threshold) the feature detections, at 1280x720 you will have so many detections that the matching step will become a bottleneck as well, possibly a more significant one than detection. And if you were to subsample them strongly, maybe you could just use smaller images in the first place?
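The subsampling suggested above can be sketched as a top-k selection on detection scores before matching. The names (`keypoints`, `scores`, `k`) and the detection counts are illustrative assumptions, not the DISK API:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000                           # assumed raw detection count at 1280x720
keypoints = rng.uniform(0, [1280, 720], size=(n, 2))  # (x, y) locations
scores = rng.random(n)             # per-keypoint detection scores

k = 1024                           # keep only the strongest detections
top = np.argsort(-scores)[:k]      # indices of the k highest-scoring keypoints
keypoints_sub = keypoints[top]
scores_sub = scores[top]

print(keypoints_sub.shape)  # (1024, 2)
```

Brute-force matching cost scales with the product of the two images' keypoint counts, so cutting 5000 detections down to 1024 per image shrinks the pair count by roughly 24x, which is why heavy subsampling (or simply smaller input images) can matter as much as faster detection.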
