🤖 AI Summary
Joint extrinsic calibration of event cameras, LiDAR, and RGB cameras remains challenging due to severe error accumulation and the lack of effective multi-modal calibration targets. Method: This paper proposes an end-to-end joint calibration framework built around a custom-designed 3D multi-modal calibration target that integrates planar geometry, ChArUco markers, and an active LED array, enabling synchronized, single-shot observation across all three modalities. The method jointly optimizes temporal, spatial, and geometric alignment by fusing geometric feature matching, ChArUco detection, spatiotemporal activation-pattern recognition in event streams, and cross-sensor synchronization. Contribution/Results: Evaluated on a newly collected autonomous-driving multi-modal dataset, the framework achieves significantly higher extrinsic calibration accuracy than state-of-the-art pairwise methods, demonstrates strong robustness, and effectively resolves the long-standing instability issues in event camera calibration.
📝 Abstract
We present a novel multi-modal extrinsic calibration framework designed to simultaneously estimate the relative poses between event cameras, LiDARs, and RGB cameras, with particular focus on the challenging event camera calibration. The core of our approach is a novel 3D calibration target, specifically designed and constructed to be concurrently perceived by all three sensing modalities. The target encodes features as planes, ChArUco markers, and active LED patterns, each tailored to the unique characteristics of LiDARs, RGB cameras, and event cameras, respectively. This design enables a one-shot, joint extrinsic calibration process, in contrast to existing approaches that typically rely on separate, pairwise calibrations. Our calibration pipeline is built to accurately calibrate complex vision systems in the context of autonomous driving, where precise multi-sensor alignment is critical. We validate our approach through an extensive experimental evaluation on a custom-built dataset, recorded with an advanced autonomous driving sensor setup, confirming the accuracy and robustness of our method.
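To make the contrast with pairwise calibration concrete, the joint estimation described above can be posed as a single least-squares problem over both unknown extrinsics at once. The sketch below is a toy illustration only, under strong assumptions: noise-free synthetic point correspondences stand in for the plane, ChArUco, and LED features, and all function names are illustrative, not the paper's actual pipeline.

```python
# Toy sketch: jointly refining two extrinsics (RGB -> event, RGB -> LiDAR)
# in one solve, instead of two separate pairwise calibrations.
# Assumptions (not from the paper): target features are already expressed in
# the RGB-camera frame, and per-sensor observations are ideal 3D points.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R

def transform(pose, pts):
    """Apply a 6-DoF pose (3 rotation-vector + 3 translation params) to Nx3 points."""
    return R.from_rotvec(pose[:3]).apply(pts) + pose[3:6]

# Synthetic target features in the RGB-camera frame (illustrative stand-in
# for ChArUco corners / plane points / LED centroids on the 3D target).
rng = np.random.default_rng(0)
feats_rgb = rng.uniform(-0.5, 0.5, (30, 3)) + np.array([0.0, 0.0, 2.0])

# Ground-truth extrinsics used to simulate what each sensor would observe.
gt_event = np.array([0.02, -0.01, 0.03, 0.10, 0.00, 0.05])
gt_lidar = np.array([-0.03, 0.02, 0.01, -0.20, 0.10, 0.00])
obs_event = transform(gt_event, feats_rgb)  # idealised event-camera features
obs_lidar = transform(gt_lidar, feats_rgb)  # idealised LiDAR features

def residuals(x):
    """Stack event-camera and LiDAR alignment errors into one joint residual."""
    r_event = transform(x[:6], feats_rgb) - obs_event
    r_lidar = transform(x[6:], feats_rgb) - obs_lidar
    return np.concatenate([r_event.ravel(), r_lidar.ravel()])

# One solve recovers both extrinsics simultaneously from a single "shot".
sol = least_squares(residuals, x0=np.zeros(12))
est_event, est_lidar = sol.x[:6], sol.x[6:]
```

The key design point this mimics is that both unknown poses are constrained by the same single-shot target observation, so errors do not accumulate across a chain of pairwise calibrations.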