[FIXED] Bug: properly handling batched data in the ego motion loss and Chamfer distance computation #5
Comments
Hi @tpzou, thanks for catching this, it is indeed a bug that results from a legacy issue (earlier versions of ME used the last column to store the batch indices). Because of this bug, our ego motion loss was applied to points that always lie on a specific line (a different line for each element in the batch, since the batch index was treated as a coordinate). In theory, our formulation of the ego loss would allow us to use random 3D points, as the points are transformed with both the GT and the EST transformation parameters. Therefore, this bug should not have a significant effect on the performance and, if anything, fixing it should actually improve it. I have uploaded an updated version. Thank you again, Best, Zan
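A minimal sketch of the coordinate-layout issue described above (NumPy, with an illustrative toy array; the column layout follows MinkowskiEngine conventions, where newer versions store the batch index in the first column while legacy versions used the last): slicing the first three columns of new-style coordinates keeps the batch index and drops z, so every "point" of batch element b carries the constant value b as its first coordinate.

```python
import numpy as np

# Hypothetical batched coordinates in the new-style ME layout:
# columns are [batch_idx, x, y, z].
coords = np.array([
    [0, 1.0, 2.0, 3.0],
    [0, 4.0, 5.0, 6.0],
    [1, 7.0, 8.0, 9.0],
])

# Buggy slice (written under the legacy assumption that the batch index
# is in the LAST column): this keeps the batch index and drops z.
xyz_buggy = coords[:, :3]   # columns: batch_idx, x, y

# Correct slice for the new layout: drop the batch-index column instead.
xyz = coords[:, 1:4]        # columns: x, y, z

# With the buggy slice, all points of batch element 0 share the constant
# first coordinate 0 (they are confined to a degenerate subspace).
assert np.all(xyz_buggy[coords[:, 0] == 0, 0] == 0)
```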
Hi @tpzou, the model is still training, but after fixing this bug the ego_loss as well as the total loss are lower on both the training and validation data. I will update the results once the model has fully converged.
Hi @zgojcic,
Hi @tpzou, we have never tried that, but I can have a quick look in the coming days/weeks. Otherwise, you can simply increase the accumulation iterations parameter if you would like a bigger effective batch size.
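The gradient-accumulation trick mentioned above can be sketched framework-agnostically (a toy least-squares model in NumPy; all names are illustrative, not the repo's API): averaging the micro-batch gradients, weighted by micro-batch size, before a single parameter update is equivalent to one step on the combined batch.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))   # 8 samples, 3 features
y = rng.normal(size=8)
w = np.zeros(3)
lr = 0.1

def grad(w, Xb, yb):
    # Gradient of the mean squared error 0.5 * mean((Xb @ w - yb)^2).
    return Xb.T @ (Xb @ w - yb) / len(yb)

# One step on the full batch of 8 samples...
w_full = w - lr * grad(w, X, y)

# ...equals one step with gradients accumulated over 4 micro-batches
# of 2 samples each, weighted by micro-batch size / total batch size.
acc = np.zeros(3)
for i in range(0, 8, 2):
    acc += grad(w, X[i:i+2], y[i:i+2]) * 2 / 8
w_acc = w - lr * acc

assert np.allclose(w_full, w_acc)
```

The equivalence holds because the loss is a mean over samples; batch-dependent layers (e.g. batch normalization) would break this exact equality, which is why accumulation only approximates a truly larger batch.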
Hi @zgojcic,
Sorry for the extremely late response, I somehow missed this post. Indeed, this will affect some cases, and one should iterate over the batch. I think the difference will be small, but I will update the model and post the results here.
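Iterating over the batch, as suggested above, can be sketched as follows (NumPy, with illustrative function and variable names): if all batch elements are pooled into one point cloud, nearest neighbours can cross batch boundaries and the Chamfer distance is underestimated, so each element must be handled separately.

```python
import numpy as np

def chamfer(p, q):
    # One-directional Chamfer term: mean over points in p of the squared
    # distance to the nearest point in q.
    d2 = ((p[:, None, :] - q[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean()

def batched_chamfer(points_s, points_t, batch_s, batch_t):
    # Iterate over batch elements so that nearest-neighbour search never
    # crosses batch boundaries.
    ids = np.unique(batch_s)
    total = 0.0
    for b in ids:
        p = points_s[batch_s == b]
        q = points_t[batch_t == b]
        total += chamfer(p, q) + chamfer(q, p)
    return total / len(ids)

# Toy example: the nearest pooled neighbour of each source point belongs
# to the OTHER batch element, so pooling hides most of the distance.
ps = np.array([[0.0, 0, 0], [10.0, 0, 0]])
pt = np.array([[9.0, 0, 0], [1.0, 0, 0]])
bs = np.array([0, 1])
bt = np.array([0, 1])

pooled = chamfer(ps, pt) + chamfer(pt, ps)          # 2.0
per_batch = batched_chamfer(ps, pt, bs, bt)          # 162.0
assert per_batch > pooled
```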
Hi @zgojcic, is there a mistake at loss.py line 83? The first column of p_s_temp[mask_temp, :3] is the batch id... why is it transformed?