🤖 AI Summary
To address the significant degradation in long-range (>100 m) perception performance caused by sensor calibration misalignment in autonomous driving, this paper proposes the first end-to-end multi-task learning framework that jointly performs cross-modal sensor angular deviation detection, calibration uncertainty quantification (achieving calibration error < 0.15°), and input-data-space self-correction. The method requires no additional hardware or re-calibration, integrating uncertainty-aware calibration with a differentiable geometric self-correction module to enable real-time inference. Experimental results demonstrate a 32% improvement in angular deviation detection accuracy and a 47% reduction in false positives under long-range conditions. Moreover, after self-correction, BEV object detection achieves a 21.6% increase in mAP, substantially enhancing model generalization and robustness against calibration drift.
📝 Abstract
Advances in machine learning algorithms for sensor fusion have significantly improved the detection and prediction of other road users, thereby enhancing safety. However, even a small angular displacement in the sensor's placement can cause significant degradation in output, especially at long range. In this paper, we demonstrate a simple yet generic and efficient multi-task learning approach that not only detects misalignment between different sensor modalities but is also robust against them for long-range perception. Along with the amount of misalignment, our method also predicts calibrated uncertainty, which can be useful for filtering and fusing predicted misalignment values over time. In addition, we show that the predicted misalignment parameters can be used for self-correcting input sensor data, further improving the perception performance under sensor misalignment.