
# MDPCalib

arXiv | IEEE Xplore | Website | Video

This repository is the official implementation of the paper:

**Automatic Target-Less Camera-LiDAR Calibration from Motion and Deep Point Correspondences**

Kürsat Petek*, Niclas Vödisch*, Johannes Meyer, Daniele Cattaneo, Abhinav Valada, and Wolfram Burgard.
*Equal contribution.

IEEE Robotics and Automation Letters, vol. 9, issue 11, pp. 9978-9985, November 2024

*Figure: Overview of the MDPCalib approach.*

If you find our work useful, please consider citing our paper:

```bibtex
@article{petek2024mdpcalib,
  author={Petek, Kürsat and Vödisch, Niclas and Meyer, Johannes and Cattaneo, Daniele and Valada, Abhinav and Burgard, Wolfram},
  journal={IEEE Robotics and Automation Letters},
  title={Automatic Target-Less Camera-LiDAR Calibration From Motion and Deep Point Correspondences},
  year={2024},
  volume={9},
  number={11},
  pages={9978-9985}
}
```

## 📔 Abstract

Sensor setups of robotic platforms commonly include both camera and LiDAR as they provide complementary information. However, fusing these two modalities typically requires a highly accurate calibration between them. In this paper, we propose MDPCalib, a novel method for camera-LiDAR calibration that requires neither human supervision nor any specific target objects. Instead, we utilize sensor motion estimates from visual and LiDAR odometry as well as deep learning-based 2D-pixel-to-3D-point correspondences that are obtained without in-domain retraining. We represent the camera-LiDAR calibration as a graph optimization problem and minimize the costs induced by constraints from sensor motion and point correspondences. In extensive experiments, we demonstrate that our approach yields highly accurate extrinsic calibration parameters and is robust to random initialization. Additionally, our approach generalizes to a wide range of sensor setups, which we demonstrate by employing it on various robotic platforms including a self-driving perception car, a quadruped robot, and a UAV.

## 👩‍💻 Code

For licensing reasons, we will release the code upon acceptance of CMRNext, the point correspondence network used in this work.

## 👩‍⚖️ License

For academic usage, the code is released under the GPLv3 license. For any commercial purpose, please contact the authors.

## 🙏 Acknowledgment

This work was funded by the German Research Foundation (DFG) Emmy Noether Program, grant no. 468878300, and an academic grant from NVIDIA.
