One target to align them all: LiDAR, RGB and event cameras extrinsic calibration for Autonomous Driving

📅 2025-11-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Joint extrinsic calibration of event cameras, LiDAR, and RGB cameras remains challenging due to severe error accumulation and the lack of effective multi-modal calibration targets. Method: the paper proposes an end-to-end joint calibration framework built around a custom-designed 3D multi-modal calibration target (combining planar geometry, ChArUco markers, and an active LED array) that enables synchronized, single-shot observation across all three modalities. The method jointly optimizes temporal, spatial, and geometric alignment by fusing geometric feature matching, ChArUco detection, spatio-temporal activation-pattern recognition in the event stream, and cross-sensor synchronization. Contribution/Results: evaluated on a newly collected autonomous-driving multi-modal dataset, the framework achieves significantly higher extrinsic calibration accuracy than state-of-the-art pairwise methods, demonstrates strong robustness, and mitigates the long-standing instability of event camera calibration.

📝 Abstract
We present a novel multi-modal extrinsic calibration framework designed to simultaneously estimate the relative poses between event cameras, LiDARs, and RGB cameras, with particular focus on the challenging event camera calibration. The core of our approach is a novel 3D calibration target, specifically designed and constructed to be concurrently perceived by all three sensing modalities. The target encodes features as planes, ChArUco markers, and active LED patterns, each tailored to the unique characteristics of LiDARs, RGB cameras, and event cameras, respectively. This design enables a one-shot, joint extrinsic calibration process, in contrast to existing approaches that typically rely on separate, pairwise calibrations. Our calibration pipeline is designed to accurately calibrate complex vision systems in the context of autonomous driving, where precise multi-sensor alignment is critical. We validate our approach through an extensive experimental evaluation on a custom-built dataset, recorded with an advanced autonomous driving sensor setup, confirming the accuracy and robustness of our method.
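The planar geometry of such a target is the feature a LiDAR perceives directly: a plane can be recovered from a patch of the raw point cloud by least squares. The snippet below is a minimal sketch of that step (plain SVD plane fitting on synthetic points); it is illustrative only and not the authors' implementation:

```python
import numpy as np

def fit_plane(points: np.ndarray):
    """Least-squares plane fit to an (N, 3) LiDAR point patch.

    Returns (normal, d) such that normal . x + d = 0, with a unit normal.
    """
    centroid = points.mean(axis=0)
    # Right singular vectors of the centered cloud: the one with the
    # smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    d = -normal @ centroid
    return normal, d

# Synthetic patch on the plane z = 1, with millimetre-scale noise.
rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(200, 2))
pts = np.column_stack([xy, np.ones(200) + rng.normal(0.0, 1e-3, 200)])
n, d = fit_plane(pts)  # n should be close to (0, 0, +/-1), |d| close to 1
```

A plane fitted this way in the LiDAR frame can then be matched against the same physical plane observed by the cameras, which is what ties the point cloud into the joint optimization.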
Problem

Research questions and friction points this paper is trying to address.

Simultaneously calibrating LiDAR, RGB, and event cameras for autonomous driving
Developing a unified 3D target for multi-modal sensor calibration
Enabling one-shot extrinsic calibration instead of pairwise methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Novel 3D calibration target for multi-modal sensors
One-shot joint extrinsic calibration replaces pairwise methods
Target encodes planes, ChArUco and LED patterns
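One way to see why one-shot joint calibration beats chained pairwise calibration: composing two pairwise extrinsics also composes their individual errors, while a joint estimate introduces a single error term. A toy numerical sketch (all transform values and error magnitudes below are made up for illustration, not taken from the paper):

```python
import numpy as np

def se3(R, t):
    """Pack rotation R (3x3) and translation t (3,) into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(a):
    """Rotation about the z axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Hypothetical ground-truth extrinsics (illustrative values).
T_rgb_lidar = se3(rot_z(0.10), [0.5, 0.0, 0.1])    # LiDAR -> RGB camera
T_event_rgb = se3(rot_z(-0.05), [0.02, 0.0, 0.0])  # RGB -> event camera
T_event_lidar_true = T_event_rgb @ T_rgb_lidar     # LiDAR -> event camera

# A small per-calibration error (5 mm translation, ~0.6 deg rotation).
eps = se3(rot_z(0.01), [0.005, 0.0, 0.0])

# Pairwise chaining picks up one error per stage; joint calibration only one.
T_chained = (T_event_rgb @ eps) @ (T_rgb_lidar @ eps)
T_joint = T_event_lidar_true @ eps

err_chained = np.linalg.norm(T_chained - T_event_lidar_true)
err_joint = np.linalg.norm(T_joint - T_event_lidar_true)
# err_chained exceeds err_joint: the chained estimate accumulates both errors.
```

Because the proposed target is observed by all three sensors in the same shot, every extrinsic is constrained by the same set of simultaneous observations, avoiding this error chain entirely.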
Andrea Bertogalli
DEIB, Politecnico di Milano, Milan, IT
Giacomo Boracchi
Associate Professor, Politecnico di Milano, DEIB
Image Processing, Computer Vision, Anomaly Detection, Change Detection
Luca Magri
DEIB, Politecnico di Milano, Milan, IT