Questions about producing the newest results with the newest codes #75

HowLeeHi opened this issue Aug 17, 2023 · 0 comments

Hi!
Thank you so much for this amazing work!

I'm trying to reproduce the results with your newest code.
I followed the guidance in the README and got the following results:

1. For 3DMatch_250_prob:

| Scene   | prec. | rec.  | RRE (°) | RTE (m) | samples |
|---------|-------|-------|---------|---------|---------|
| Kitchen | 0.958 | 0.958 | 2.382   | 0.065   | 449     |
| Home 1  | 0.906 | 0.906 | 2.618   | 0.087   | 106     |
| Home 2  | 0.761 | 0.761 | 3.299   | 0.085   | 159     |
| Hotel 1 | 0.945 | 0.945 | 2.425   | 0.083   | 182     |
| Hotel 2 | 0.923 | 0.923 | 2.674   | 0.101   | 78      |
| Hotel 3 | 0.808 | 0.808 | 2.450   | 0.074   | 26      |
| Study   | 0.808 | 0.808 | 2.898   | 0.098   | 234     |
| MIT Lab | 0.733 | 0.733 | 2.916   | 0.106   | 45      |

Mean precision: 0.855 ± 0.082
Weighted precision: 0.887
Mean median RRE: 2.708 ± 0.294
Mean median RTE: 0.087 ± 0.013
Inlier ratio w_mutual: 0.467 ± 0.045
Feature match recall w_mutual: 0.968 ± 0.024
Inlier ratio wo_mutual: 0.358 ± 0.039
Feature match recall wo_mutual: 0.958 ± 0.029
2. For 3DMatch_500_prob:

| Scene   | prec. | rec.  | RRE (°) | RTE (m) | samples |
|---------|-------|-------|---------|---------|---------|
| Kitchen | 0.967 | 0.967 | 1.971   | 0.057   | 449     |
| Home 1  | 0.953 | 0.953 | 1.908   | 0.065   | 106     |
| Home 2  | 0.780 | 0.780 | 2.552   | 0.081   | 159     |
| Hotel 1 | 0.973 | 0.973 | 1.865   | 0.064   | 182     |
| Hotel 2 | 0.923 | 0.923 | 2.322   | 0.069   | 78      |
| Hotel 3 | 0.808 | 0.808 | 2.752   | 0.046   | 26      |
| Study   | 0.833 | 0.833 | 2.440   | 0.094   | 234     |
| MIT Lab | 0.778 | 0.778 | 2.038   | 0.082   | 45      |

Mean precision: 0.877 ± 0.080
Weighted precision: 0.906
Mean median RRE: 2.231 ± 0.310
Mean median RTE: 0.070 ± 0.014
Inlier ratio w_mutual: 0.510 ± 0.044
Feature match recall w_mutual: 0.965 ± 0.022
Inlier ratio wo_mutual: 0.407 ± 0.043
Feature match recall wo_mutual: 0.957 ± 0.036
3. For 3DMatch_1000_prob:

| Scene   | prec. | rec.  | RRE (°) | RTE (m) | samples |
|---------|-------|-------|---------|---------|---------|
| Kitchen | 0.976 | 0.976 | 1.845   | 0.052   | 449     |
| Home 1  | 0.972 | 0.972 | 1.938   | 0.067   | 106     |
| Home 2  | 0.780 | 0.780 | 2.389   | 0.077   | 159     |
| Hotel 1 | 0.978 | 0.978 | 1.826   | 0.068   | 182     |
| Hotel 2 | 0.949 | 0.949 | 1.800   | 0.069   | 78      |
| Hotel 3 | 0.808 | 0.808 | 2.414   | 0.053   | 26      |
| Study   | 0.846 | 0.846 | 2.227   | 0.090   | 234     |
| MIT Lab | 0.778 | 0.778 | 2.281   | 0.070   | 45      |

Mean precision: 0.886 ± 0.085
Weighted precision: 0.916
Mean median RRE: 2.090 ± 0.247
Mean median RTE: 0.068 ± 0.011
Inlier ratio w_mutual: 0.532 ± 0.044
Feature match recall w_mutual: 0.966 ± 0.022
Inlier ratio wo_mutual: 0.434 ± 0.043
Feature match recall wo_mutual: 0.962 ± 0.030
4. For 3DMatch_2500_prob:

| Scene   | prec. | rec.  | RRE (°) | RTE (m) | samples |
|---------|-------|-------|---------|---------|---------|
| Kitchen | 0.971 | 0.971 | 1.770   | 0.049   | 449     |
| Home 1  | 0.972 | 0.972 | 1.883   | 0.064   | 106     |
| Home 2  | 0.761 | 0.761 | 2.508   | 0.074   | 159     |
| Hotel 1 | 0.984 | 0.984 | 1.805   | 0.060   | 182     |
| Hotel 2 | 0.936 | 0.936 | 1.800   | 0.061   | 78      |
| Hotel 3 | 0.885 | 0.885 | 2.672   | 0.062   | 26      |
| Study   | 0.850 | 0.850 | 2.032   | 0.078   | 234     |
| MIT Lab | 0.800 | 0.800 | 1.882   | 0.078   | 45      |

Mean precision: 0.895 ± 0.079
Weighted precision: 0.915
Mean median RRE: 2.044 ± 0.327
Mean median RTE: 0.066 ± 0.009
Inlier ratio w_mutual: 0.542 ± 0.043
Feature match recall w_mutual: 0.966 ± 0.020
Inlier ratio wo_mutual: 0.444 ± 0.043
Feature match recall wo_mutual: 0.961 ± 0.028
5. For 3DMatch_5000_prob:

| Scene   | prec. | rec.  | RRE (°) | RTE (m) | samples |
|---------|-------|-------|---------|---------|---------|
| Kitchen | 0.976 | 0.976 | 1.765   | 0.050   | 449     |
| Home 1  | 0.953 | 0.953 | 1.681   | 0.054   | 106     |
| Home 2  | 0.748 | 0.748 | 2.293   | 0.073   | 159     |
| Hotel 1 | 0.978 | 0.978 | 1.785   | 0.063   | 182     |
| Hotel 2 | 0.962 | 0.962 | 1.598   | 0.063   | 78      |
| Hotel 3 | 0.846 | 0.846 | 2.511   | 0.058   | 26      |
| Study   | 0.838 | 0.838 | 1.952   | 0.081   | 234     |
| MIT Lab | 0.756 | 0.756 | 1.748   | 0.075   | 45      |

Mean precision: 0.882 ± 0.091
Weighted precision: 0.909
Mean median RRE: 1.917 ± 0.300
Mean median RTE: 0.065 ± 0.010
Inlier ratio w_mutual: 0.535 ± 0.042
Feature match recall w_mutual: 0.969 ± 0.019
Inlier ratio wo_mutual: 0.430 ± 0.041
Feature match recall wo_mutual: 0.957 ± 0.032

To summarize, the recall rates I get on 3DMatch are:

| # samples  | 5000 | 2500 | 1000 | 500  | 250  |
|------------|------|------|------|------|------|
| Recall (%) | 88.2 | 89.5 | 88.6 | 87.7 | 85.5 |
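
For reference, the mean and weighted precision above follow directly from the per-scene tables. Here is a minimal plain-Python sketch that recomputes them for the 250-sample setting (the numbers are copied from my table above; this is not your repository's evaluation script):

```python
# Recompute the summary statistics for 3DMatch_250_prob from the per-scene
# table above (values copied verbatim; a sanity check, not the repo's code).
scenes = {
    # scene: (precision, samples)
    "Kitchen": (0.958, 449),
    "Home 1":  (0.906, 106),
    "Home 2":  (0.761, 159),
    "Hotel 1": (0.945, 182),
    "Hotel 2": (0.923, 78),
    "Hotel 3": (0.808, 26),
    "Study":   (0.808, 234),
    "MIT Lab": (0.733, 45),
}

precisions = [p for p, _ in scenes.values()]
sample_counts = [n for _, n in scenes.values()]

# Unweighted mean over scenes vs. mean weighted by fragment-pair count.
mean_precision = sum(precisions) / len(precisions)
weighted_precision = sum(p * n for p, n in scenes.values()) / sum(sample_counts)

print(f"Mean precision:     {mean_precision:.3f}")      # 0.855
print(f"Weighted precision: {weighted_precision:.3f}")  # 0.887
```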

However, I see that you have fixed a bug and obtained higher performance, as shown here:
[screenshot: updated results after the bug fix]
which is slightly better than what I produced with your newest code. What's more, the results I produced seem closer to the ones from before you fixed the bug:
[screenshot: results before the bug fix]

Then I tried producing the results a second time. This time I got:

| # samples  | 5000 | 2500 | 1000 | 500  | 250  |
|------------|------|------|------|------|------|
| Recall (%) | 88.6 | 87.6 | 88.8 | 86.8 | 85.3 |

These are still closer to the results from before the bug fix.

To verify that I have the right code, I downloaded your released weights and evaluated them. Those results look correct and differ only slightly from the numbers you updated after fixing the bug:

| # samples  | 5000 | 2500 | 1000 | 500  | 250  |
|------------|------|------|------|------|------|
| Recall (%) | 89.0 | 89.7 | 90.2 | 90.1 | 85.7 |

This has confused me for a long time. Could you help me figure out what's wrong with my training phase?
I ran your code on a single 2080 Ti without any modifications to the config.
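In case it is relevant: I did not add any determinism settings beyond the repository defaults. If run-to-run training variance is the suspect, the only thing I could imagine adding is standard PyTorch seeding along these lines (a generic sketch, assuming the training script is plain PyTorch; none of this is taken from your config or code):

```python
import random

import numpy as np
import torch


def seed_everything(seed: int = 42) -> None:
    """Seed the Python, NumPy and PyTorch RNGs and make cuDNN deterministic.

    Generic PyTorch calls only; an illustrative assumption, not an option
    exposed by this repository's config.
    """
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Deterministic cuDNN kernels trade speed for run-to-run reproducibility
    # on the same GPU and driver.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


seed_everything(42)
```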
Thanks a lot!
