Trying to reproduce results #3
Comments
Hi @guyrose3, the evaluation protocol for the Brown dataset can be found in many papers: the ROC curve is obtained by varying the distance threshold, and you can sample the thresholds densely to avoid interpolation.
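For example, a minimal sketch of that threshold sweep in NumPy (assuming you already have arrays of L2 distances for the ground-truth matching and non-matching pairs; an illustration only, not the code used for the paper):

```python
import numpy as np

def roc_from_threshold_sweep(match_dists, nonmatch_dists, num_thresholds=10000):
    """ROC points obtained by sweeping a dense grid of distance thresholds."""
    match_dists = np.sort(np.asarray(match_dists))
    nonmatch_dists = np.sort(np.asarray(nonmatch_dists))
    thresholds = np.linspace(0.0, max(match_dists[-1], nonmatch_dists[-1]),
                             num_thresholds)
    # Fraction of pairs whose distance falls below each threshold.
    tpr = np.searchsorted(match_dists, thresholds, side='right') / len(match_dists)
    fpr = np.searchsorted(nonmatch_dists, thresholds, side='right') / len(nonmatch_dists)
    return fpr, tpr

def fpr_at_95_recall(match_dists, nonmatch_dists):
    fpr, tpr = roc_from_threshold_sweep(match_dists, nonmatch_dists)
    # First threshold whose recall (TPR) reaches 0.95.
    return fpr[np.argmax(tpr >= 0.95)]
```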
Hi @yuruntian,
Hi @guyrose3 |
Hello, I've been researching L2-Net recently and have run into related problems. Did you solve the problem? Could you send me a copy of your TensorFlow code? It would be greatly appreciated. @guyrose3
Hi @yuruntian, I read your paper and found it very interesting.
I'm trying to reproduce your results using TensorFlow.
Specifically, I'm trying to take the model trained on HPatches (with augmentation) and test it on the Brown dataset.
I ported the weights from MatConvNet into TensorFlow and followed the exact architecture (roughly as sketched below).
The TensorFlow descriptor works quite well for feature matching tasks, so I'm guessing I plugged in the weights correctly.
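The port itself was mostly a matter of copying arrays over; roughly like this (a sketch only, with a placeholder .mat file name, and assuming the release is a MatConvNet "simplenn" net):

```python
import numpy as np
import scipy.io as sio

# Placeholder file name; the actual released .mat file may be named differently.
mat = sio.loadmat('L2Net.mat', struct_as_record=False, squeeze_me=True)

for layer in mat['net'].layers:
    if getattr(layer, 'type', '') == 'conv':
        filters, biases = layer.weights  # MatConvNet stores filters as H x W x in x out
        # That layout already matches tf.nn.conv2d's
        # [filter_height, filter_width, in_channels, out_channels],
        # so the arrays can be assigned to the TensorFlow variables as-is.
        print(filters.shape, np.ravel(biases).shape)
```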
I also followed the Brown evaluation method and report FPR @ recall = 0.95.
In this case, however, I'm getting quite different results:
20% FPR @ recall = 0.95 on Liberty (vs. the 3.2% you reported in the paper).
So I guess I must be doing something wrong in my evaluation code.
Can you share or point me to the evaluation code you used?
Also, could you elaborate on how you evaluated on the Brown dataset (patch size, special tweaks you had to do, etc.)?
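For reference, my evaluation looks roughly like the sketch below (the pair-file format is the standard Brown m50_100000_100000_0.txt as I understand it; `describe` is a hypothetical stand-in for my TensorFlow forward pass):

```python
import numpy as np

def load_pairs(info_file='m50_100000_100000_0.txt'):
    """Each line: patchID1 pointID1 _ patchID2 pointID2 _ _ ;
    a pair is a true match iff the two 3D-point IDs agree."""
    ids1, ids2, labels = [], [], []
    with open(info_file) as f:
        for line in f:
            t = line.split()
            ids1.append(int(t[0]))
            ids2.append(int(t[3]))
            labels.append(int(t[1]) == int(t[4]))
    return np.array(ids1), np.array(ids2), np.array(labels)

def evaluate(describe, info_file='m50_100000_100000_0.txt'):
    """`describe(patch_ids)` is a placeholder for the TensorFlow forward pass:
    it should return one descriptor per patch ID (the 64x64 Brown patches are
    resized to the 32x32 network input)."""
    ids1, ids2, labels = load_pairs(info_file)
    d1, d2 = describe(ids1), describe(ids2)
    dists = np.linalg.norm(d1 - d2, axis=1)
    # FPR at 95% recall: fraction of non-matching pairs whose distance is
    # below the threshold that accepts 95% of the true matching pairs.
    return np.mean(dists[~labels] <= np.percentile(dists[labels], 95))
```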
Thanks,
Guy