SuperPoint vs SIFT? #7
Comments
I noticed that when using SIFT to extract feature points from my scenes, it extracts around 7000-10000 points per image, while SuperPoint only extracts around 2000. I'm wondering if this difference in the number of feature points might be the reason for the issue I'm experiencing.
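For reference, the per-image SIFT count can be checked with a couple of lines of OpenCV; this is just a generic sketch (the image path is illustrative), not part of this repo's pipeline:

```python
import cv2

# Count SIFT keypoints for one image; nfeatures=0 keeps all detections,
# which is how per-image counts of 7000-10000 typically arise.
img = cv2.imread("scene/0001.jpg", cv2.IMREAD_GRAYSCALE)  # path is illustrative
sift = cv2.SIFT_create(nfeatures=0)
keypoints, descriptors = sift.detectAndCompute(img, None)
print(f"SIFT keypoints: {len(keypoints)}")
```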
In my experience, 2000 keypoints should be enough to obtain valid matches. Which method and which threshold value are you using? I suspect the threshold may be set too small, so that all matches are removed.
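To illustrate the kind of threshold I mean (a generic sketch with made-up numbers, not this repo's actual code): if putative matches are filtered by a confidence cutoff, a mis-set value can leave an image pair with zero matches, which then breaks the later steps.

```python
import numpy as np

def filter_matches(matches, scores, min_score):
    """Keep only matches whose confidence reaches min_score.

    matches: (N, 2) array of keypoint index pairs
    scores:  (N,) array of match confidences
    """
    keep = scores >= min_score
    return matches[keep], scores[keep]

# Made-up numbers: ~2000 SuperPoint keypoints can still give hundreds of
# putative matches, but a mis-set threshold can discard every one of them.
rng = np.random.default_rng(0)
matches = rng.integers(0, 2000, size=(800, 2))
scores = rng.uniform(0.0, 0.6, size=800)

kept, _ = filter_matches(matches, scores, min_score=0.2)
print(len(kept))    # most matches survive
empty, _ = filter_matches(matches, scores, min_score=0.9)
print(len(empty))   # 0 -> the pair looks unmatched to the later steps
```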
Thank you for your response! I've tried everything, and none of the methods seem to work, except for the original version of SIFT. For instance, when using SuperPoint and SuperGlue with unchanged parameters, an error occurs. Even when using only SuperPoint and adopting the sift_default matching method, the same error persists. Therefore, I suspect the issue may be related to too few feature points.
Yes, my default parameters are tuned for COLMAP's SIFT feature and exhaustive matching. Using other features like SuperPoint and matchers like SuperGlue would require tuning the parameters again. How many images do you use in your dataset? I think you can take a subset of it (maybe ~20 images) and use visualization tools like the one used in the notebook to see what the proper threshold could be. You can also print the …
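As a rough sketch of that kind of check, using plain OpenCV rather than the notebook's own visualization (loading the keypoints and matches from your feature/match files is pipeline-specific and left out here):

```python
import cv2

# Draw matches for one image pair from a small subset to judge whether the
# current threshold keeps a reasonable number of matches. `kpts0`/`kpts1`
# are (N, 2) pixel coordinates and `pairs` is a list of (i, j) index pairs.
def draw_pair(img0, img1, kpts0, kpts1, pairs, out_path="matches.jpg"):
    cv_k0 = [cv2.KeyPoint(float(x), float(y), 1.0) for x, y in kpts0]
    cv_k1 = [cv2.KeyPoint(float(x), float(y), 1.0) for x, y in kpts1]
    cv_matches = [cv2.DMatch(i, j, 0.0) for i, j in pairs]
    vis = cv2.drawMatches(img0, cv_k0, img1, cv_k1, cv_matches, None)
    cv2.imwrite(out_path, vis)
    print(f"{len(pairs)} matches drawn to {out_path}")
```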
My datasets mostly consist of around 50 images each. Which specific parameter do you mean by the threshold? I've reviewed the notebook you provided; in steps 3 and 4, the match_features.py script is executed. When I debugged this Python file individually, I encountered the reported error. So far I haven't identified the threshold parameter you mentioned; I only found method-specific parameters in the options folder.
I see, so you ran into the error at step 4? I thought the error happened in later steps. Could you share one dataset with me? I could do some debugging this weekend. In the meantime, you could try the doppelgangers repo for the disambiguation problem, which should be the SOTA now.
Sure, how do I upload the data? Do you have WeChat? Thank you very much! I'll go and take a look at this new paper.
You can upload the data to Google Drive / Baidu Netdisk and share the link with me. Otherwise, you can send me the data through WeChat; just send me an email with your WeChat ID. Please also share the script and config file so that I can reproduce the error message and pinpoint the issue.
Thank you, my WeChat ID and the dataset have been sent to your email.
Impressive code! When running it with my own dataset, I utilized SuperPoint to extract feature points and proceeded with matching. However, I encountered an issue with too few feature points, resulting in unsuccessful matching. Have you encountered this? I'm curious if the limited number of pictures in my dataset might be a contributing factor.
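For context on the low SuperPoint count: most SuperPoint extraction configs cap the number of keypoints and apply a detection threshold. The dict below only illustrates which knobs are usually involved; the option names are placeholders, not this repo's actual settings.

```python
# Hypothetical SuperPoint extraction settings; the exact option names vary
# between implementations and are shown here only to illustrate the idea.
superpoint_conf = {
    "max_keypoints": 4096,        # raise the cap from a typical 1024/2048
    "keypoint_threshold": 0.003,  # lower to keep weaker detections
    "nms_radius": 4,              # smaller radius allows denser keypoints
}
```

Even with more keypoints extracted, the matching threshold discussed in the replies above would still need to be tuned for non-SIFT features.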