Fixed optimization bug under degeneracy #96
base: master
Conversation
I tried out your PR and it helped with wild pose divergences indoors (VLP-16). They still happen (especially in confined spaces), but less often.
Glad it helped :).
Indeed, thank you :) Yup, this implementation trusts the IMU too little. It only uses the gyro/accelerometer to unwarp the point cloud, even though the IMU also provides a stable pitch/roll reference with respect to gravity, which does not drift.
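For reference, a minimal sketch (not from this repository) of how such a drift-free roll/pitch reference can be recovered from the accelerometer when gravity dominates the measurement; the axis convention, signs, and function name are assumptions for illustration only:

```cpp
#include <cmath>

struct RollPitch { double roll; double pitch; };

// Accelerometer reading (m/s^2) in the body frame, assumed x-forward, y-left, z-up.
// When the platform is close to static, the measurement is dominated by gravity,
// so roll and pitch can be recovered without drift; yaw stays unobservable.
RollPitch rollPitchFromAccel(double ax, double ay, double az) {
    RollPitch rp;
    rp.roll  = std::atan2(ay, az);
    rp.pitch = -std::atan2(ax, std::sqrt(ay * ay + az * az));  // sign depends on convention
    return rp;
}
```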
Actually, the
Hmmm, are you sure it's just scaling? The angle inside the sines/cosines a couple of lines below is also multiplied by the point's relative scan time. If the costs
I think that, as in the original paper, the point scan time is not optimized here. The scan time of each point is corrected before the optimization in
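For context, a minimal sketch of the scan-time handling being discussed, assuming LOAM-style conventions (a constant-velocity motion model and a per-sweep pose increment `transform[6]`); the names and the way the point timestamp is obtained are illustrative assumptions, not this repository's code:

```cpp
#include <array>

// transform holds the pose increment over one sweep: (rx, ry, rz, tx, ty, tz).
// The point's relative scan time s is computed once, before the optimization;
// it scales both the rotation angles (used inside sin/cos) and the translation,
// but is never itself an optimization variable.
std::array<float, 6> scalePoseBySweepFraction(const std::array<float, 6>& transform,
                                              float pointTime,
                                              float sweepStartTime,
                                              float scanPeriod) {
    float s = (pointTime - sweepStartTime) / scanPeriod;  // fraction of sweep, in [0, 1]
    std::array<float, 6> scaled;
    for (int i = 0; i < 6; ++i) {
        scaled[i] = s * transform[i];  // constant-velocity interpolation
    }
    return scaled;
}
```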
Hi @KitKat7,
@tungdanganh You need to use "transposeInPlace()". https://eigen.tuxfamily.org/dox/group__TutorialMatrixArithmetic.html
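For anyone hitting the same issue, a small self-contained example of why `transposeInPlace()` is needed here: assigning a square Eigen matrix to its own transpose aliases source and destination and gives an undefined result.

```cpp
#include <Eigen/Dense>
#include <iostream>

int main() {
    Eigen::Matrix3f a;
    a << 1, 2, 3,
         4, 5, 6,
         7, 8, 9;

    // Wrong: a = a.transpose();   // aliasing: result is undefined
    a.transposeInPlace();          // correct in-place transpose
    // Alternatively: a = a.transpose().eval();

    std::cout << a << std::endl;
    return 0;
}
```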
Hi, @KitKat7,
According to the paper "J. Zhang, M. Kaess and S. Singh, On degeneracy of optimization-based state estimation problems, 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, 2016", when degeneracy happens the update should be remapped by "matV.inverse() * matV2".
However, in the original paper Vu and Vf are transposed matrices of eigenvectors, which should correspond to matV2 and matV in the current implementation a4c364a.
Thus, to update the states towards well-conditioned directions only, "matV.transpose().inverse() * matV2.transpose()" should be used, or equivalently "matV2 * matV.inverse()".
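To make the proposal concrete, here is a minimal Eigen sketch of the remapping being discussed (the repository itself uses cv::Mat; the threshold value, function name, and the choice of `matV2 * matV.inverse()` follow the comment above rather than a verified reference):

```cpp
#include <Eigen/Dense>

// Remap a Gauss-Newton update so the state only moves along well-conditioned
// directions of the normal matrix J^T J, in the spirit of Zhang, Kaess & Singh (ICRA 2016).
Eigen::Matrix<float, 6, 1> remapUpdate(const Eigen::Matrix<float, 6, 6>& JtJ,
                                       const Eigen::Matrix<float, 6, 1>& deltaX,
                                       float eigThreshold = 100.0f) {
    // Eigen-decomposition of the symmetric normal matrix; eigenvalues come in ascending order.
    Eigen::SelfAdjointEigenSolver<Eigen::Matrix<float, 6, 6>> es(JtJ);
    Eigen::Matrix<float, 6, 1> eigVals = es.eigenvalues();
    Eigen::Matrix<float, 6, 6> matV = es.eigenvectors().transpose();  // rows = eigenvectors

    // matV2: copy of matV with the rows of degenerate (small-eigenvalue) directions zeroed.
    Eigen::Matrix<float, 6, 6> matV2 = matV;
    bool isDegenerate = false;
    for (int i = 0; i < 6; ++i) {
        if (eigVals(i) < eigThreshold) {
            matV2.row(i).setZero();
            isDegenerate = true;
        }
    }
    if (!isDegenerate) return deltaX;

    // Projection proposed in the comment above: matV2 * matV.inverse()
    // (the original implementation used matV.inverse() * matV2).
    Eigen::Matrix<float, 6, 6> matP = matV2 * matV.inverse();
    return matP * deltaX;
}
```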