Thanks for releasing this amazing code. Is it possible to release the evaluation code that reproduces the Chamfer distance (GenRe, 0.106) reported in Table 1 of the GenRe paper? I followed the setting described in the paper but got a much worse Chamfer distance (GenRe, 0.30) with the released pre-trained model.
Hi @xianyongqin, I also get worse numbers using their evaluation code from pix3d. For Lamp, the paper reports 0.124 but I get 0.297. Did you figure out why this happens? Thanks a lot!
Referencing issue #73.
For CD discrepancies, a few things might help trace the issue:
- The GT voxels should be surface-only, not solid (see the first sketch below).
- The prediction should at least match the given mask at the test viewing angle.
- For the results in Table 1, we searched over different thresholding values, usually from 0.3 to 0.5 with a 0.05 step size, separately for each class (see the second sketch below). Though this is not ideal, I believe we did it because previous baselines reported their numbers in a similar fashion.
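A minimal sketch of the surface-only conversion, assuming the GT is a solid (filled) boolean occupancy grid; the function name and the use of `scipy.ndimage.binary_erosion` are illustrative, not GenRe's actual preprocessing:

```python
import numpy as np
from scipy.ndimage import binary_erosion

def solid_to_surface(vox):
    """Keep only surface voxels: the solid minus its one-voxel erosion.

    A filled interior inflates the GT point set and skews Chamfer
    distance, so the GT grid should be surface-only before sampling.
    `vox` is a boolean occupancy grid, e.g. shape (128, 128, 128).
    """
    vox = vox.astype(bool)
    interior = binary_erosion(vox)  # voxels fully surrounded by occupancy
    return vox & ~interior
```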
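And a sketch of the per-class threshold search, assuming predictions are occupancy grids with values in [0, 1] and the GT is a surface point cloud; the helper names, the symmetric Chamfer definition, and the unit-cube normalization are assumptions, not the paper's exact evaluation code:

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer(p, q):
    """One common symmetric Chamfer distance between point sets (N,3), (M,3)."""
    d_pq = cKDTree(q).query(p)[0]  # nearest-neighbor distances p -> q
    d_qp = cKDTree(p).query(q)[0]  # nearest-neighbor distances q -> p
    return d_pq.mean() + d_qp.mean()

def best_threshold_cd(pred_vox_list, gt_pts_list,
                      thresholds=np.arange(0.3, 0.501, 0.05)):
    """Search thresholds 0.3..0.5 (step 0.05) and keep the best class-mean CD.

    pred_vox_list: predicted occupancy grids (floats in [0, 1]) for one class.
    gt_pts_list:   matching GT surface point clouds in the unit cube.
    """
    best = (None, float("inf"))
    for t in thresholds:
        cds = []
        for pred, gt in zip(pred_vox_list, gt_pts_list):
            pts = np.argwhere(pred > t).astype(np.float32)
            if len(pts) == 0:
                continue
            pts /= pred.shape[0]  # normalize voxel indices to the unit cube
            cds.append(chamfer(pts, gt))
        if cds and np.mean(cds) < best[1]:
            best = (float(t), float(np.mean(cds)))
    return best  # (chosen threshold, class-mean CD)
```

Since the threshold is picked per class, a single fixed threshold (e.g. 0.5) applied across all classes can easily produce noticeably worse numbers than Table 1.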