🤖 AI Summary
Conventional LiDAR–camera extrinsic calibration typically relies on artificial calibration targets and multiple data acquisitions. To address these limitations, this paper proposes two single-shot, targetless, line-feature-based calibration algorithms. The first builds line-to-line constraints from point-to-line projection errors and minimizes the resulting reprojection cost. The second, PLK-Calib, leverages the co-perpendicular and co-parallel geometric properties of 3D lines represented in Plücker coordinates to decouple the estimation of rotation and translation into two separate constraints, enabling more accurate estimates. Degeneracy analysis and Monte Carlo simulations indicate that three non-parallel corresponding line pairs are the minimal requirement for recovering the extrinsic parameters, without dependence on point- or plane-based features. Evaluated on a custom LiDAR–camera dataset with varying extrinsics across three scenarios, the approach achieves mean rotational and translational errors of 0.12° and 1.8 cm, respectively, substantially outperforming existing targetless methods.
📝 Abstract
Accurate LiDAR-Camera (LC) calibration is challenging but crucial for autonomous systems and robotics. In this paper, we propose two single-shot, targetless algorithms to estimate the calibration parameters between LiDAR and camera using line features. The first algorithm constructs line-to-line constraints by defining point-to-line projection errors and minimizes this projection error. The second algorithm (PLK-Calib) utilizes the co-perpendicular and co-parallel geometric properties of lines in Plücker (PLK) coordinates, and decouples the rotation and translation into two constraints, enabling more accurate estimates. Our degeneracy analysis and Monte Carlo simulations indicate that three non-parallel line pairs are the minimal requirement to estimate the extrinsic parameters. Furthermore, we collect an LC calibration dataset with varying extrinsics under three different scenarios and use it to evaluate the performance of our proposed algorithms.
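To make the two building blocks above concrete, here is a minimal sketch (not the paper's implementation) of the standard Plücker representation of a 3D line and a point-to-line reprojection error in the image plane. The function names and the two-point line construction are illustrative assumptions; the paper's actual cost functions and constraint formulations may differ.

```python
import numpy as np

def plucker_from_points(p1, p2):
    """Plücker coordinates (d, m) of the 3D line through points p1 and p2.

    d is the unit direction; m = p1 x d is the moment, which is
    independent of which point on the line is used.
    """
    d = p2 - p1
    d = d / np.linalg.norm(d)
    m = np.cross(p1, d)
    return d, m

def point_to_line_error(line_2d, pts_2d):
    """Mean perpendicular distance from 2D points to the image line
    a*x + b*y + c = 0, the kind of residual a line-to-line constraint
    can be built from."""
    a, b, c = line_2d
    norm = np.hypot(a, b)
    return np.mean(np.abs(pts_2d @ np.array([a, b]) + c) / norm)
```

For example, the line through (0, 1, 0) and (2, 1, 0) has direction (1, 0, 0) and moment (0, 0, -1); projecting a LiDAR line's endpoints into the image and scoring them against a detected 2D line with `point_to_line_error` gives one residual term of the type the first algorithm minimizes.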