π€ AI Summary
This paper addresses the challenging problem of targetless LiDARβcamera extrinsic calibration. We propose an end-to-end differentiable calibration method based on neural rendering. Our key contribution is the first introduction of a LiDAR-point-cloud-driven 2D Gaussian rasterization scheme as a color-free texture representation, eliminating reliance on scene texture and illumination conditions. We formulate a unified optimization framework integrating photometric, reprojection, and triangulation geometric constraints to ensure geometrically consistent extrinsic estimation. The method requires no artificial calibration targets (e.g., checkerboards) or auxiliary objects and supports fully differentiable end-to-end training. Evaluated on real-world scenes, it achieves sub-pixel reprojection accuracy and reduces calibration error by 37% over state-of-the-art methods. It demonstrates significantly enhanced robustness and practicality in texture-poor, low-light, and dynamic environments.
π Abstract
LiDAR-camera systems have become increasingly popular in robotics recently. A critical and initial step in integrating the LiDAR and camera data is the calibration of the LiDAR-camera system. Most existing calibration methods rely on auxiliary target objects, which often involve complex manual operations, whereas targetless methods have yet to achieve practical effectiveness. Recognizing that 2D Gaussian Splatting (2DGS) can reconstruct geometric information from camera image sequences, we propose a calibration method that estimates LiDAR-camera extrinsic parameters using geometric constraints. The proposed method begins by reconstructing colorless 2DGS using LiDAR point clouds. Subsequently, we update the colors of the Gaussian splats by minimizing the photometric loss. The extrinsic parameters are optimized during this process. Additionally, we address the limitations of the photometric loss by incorporating the reprojection and triangulation losses, thereby enhancing the calibration robustness and accuracy.