Optical aberrations in autonomous driving: Physics-informed parameterized temperature scaling for neural network uncertainty calibration

📅 2024-12-18
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Optical distortion induced by automotive windshields causes data distribution shift, leading to miscalibrated model uncertainty and degraded perception reliability. To address this, we propose a physics-informed uncertainty calibration method that integrates optical priors into the calibration framework. Specifically, we encode Zernike polynomial–based optical parameters as interpretable physical inductive biases within a parametric temperature scaling architecture, establishing an explicit mapping between optical aberration magnitude and calibration strength. Leveraging a physics-informed neural network, we perform end-to-end joint modeling, achieving significant reduction in expected calibration error (ECE) on semantic segmentation tasks. Our approach is the first to enable verifiable, traceable, and end-to-end interpretable uncertainty calibration under optical degradation—establishing a novel robust and trustworthy paradigm for real-world autonomous driving deployment.
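The paper does not publish code, but the core idea of the summary can be sketched as follows: instead of a single global temperature, the temperature applied to the logits is a learned function of the Zernike coefficient vector describing the windshield's aberration. The parameterization below (weights `w`, bias `b`, softplus link, floor `t_min`) is a hypothetical minimal example, not the authors' architecture.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def aberration_temperature(zernike, w, b, t_min=0.5):
    """Map a Zernike coefficient vector to a scalar temperature.

    Hypothetical parameterization: temperature grows with the
    aberration magnitude encoded by the learned weights `w` and
    bias `b`. Softplus keeps the temperature positive; `t_min`
    bounds it away from zero.
    """
    s = float(np.dot(w, np.abs(zernike)) + b)
    return t_min + np.log1p(np.exp(s))  # softplus(s) + t_min

def calibrated_probs(logits, zernike, w, b):
    """Temperature-scale logits with an aberration-dependent temperature."""
    T = aberration_temperature(zernike, w, b)
    return softmax(logits / T, axis=-1)
```

In this sketch, a stronger aberration (larger Zernike coefficients) yields a higher temperature and therefore a flatter, less confident predictive distribution, which is the qualitative behavior the summary describes.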

📝 Abstract
'A trustworthy representation of uncertainty is desirable and should be considered as a key feature of any machine learning method' (Hüllermeier and Waegeman, 2021). This conclusion of Hüllermeier and Waegeman underpins the importance of calibrated uncertainties. Since AI-based algorithms are heavily impacted by dataset shifts, the automotive industry needs to safeguard its systems against all possible contingencies. One important but often neglected dataset shift is caused by optical aberrations induced by the windshield. For the verification of the perception system performance, requirements on the AI performance need to be translated into optical metrics by a bijective mapping. Given this bijective mapping, it is evident that the optical system characteristics add additional information about the magnitude of the dataset shift. As a consequence, we propose to incorporate a physical inductive bias into the neural network calibration architecture to enhance the robustness and the trustworthiness of the AI target application, which we demonstrate using a semantic segmentation task as an example. By utilizing the Zernike coefficient vector of the optical system as a physical prior, we can significantly reduce the mean expected calibration error in case of optical aberrations. As a result, we pave the way for a trustworthy uncertainty representation and for a holistic verification strategy of the perception chain.
Problem

Research questions and friction points this paper is trying to address.

Calibrating neural network uncertainty for optical aberrations in autonomous driving
Addressing dataset shifts caused by windshield-induced optical aberrations
Enhancing AI robustness with physics-informed uncertainty calibration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Physics-informed neural network calibration for uncertainty
Zernike coefficients as optical prior for robustness
Bijective mapping translates AI performance to optical metrics
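The headline metric in both the summary and the abstract is the expected calibration error (ECE). As a reference point, here is a minimal sketch of the standard equal-width-bin ECE, the weighted mean gap between per-bin confidence and per-bin accuracy; the binning scheme and bin count are conventional choices, not details taken from this paper.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Equal-width-bin ECE.

    `confidences`: max predicted probability per sample.
    `correct`: 1 if the prediction was right, else 0.
    Returns the bin-size-weighted mean |confidence - accuracy| gap.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        gap = abs(confidences[mask].mean() - correct[mask].mean())
        ece += mask.mean() * gap  # weight by fraction of samples in bin
    return ece
```

A perfectly calibrated model (e.g. 75% confidence, 75% accuracy within a bin) scores zero; overconfidence under aberration-induced shift inflates the gap, which is what the proposed physics-informed calibration aims to reduce.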
Authors

D. Wolf
Karlsruhe Institute of Technology and Volkswagen Group, Germany

Alexander Braun
University of Applied Sciences Duesseldorf, Germany

Markus Ulrich
Karlsruhe Institute of Technology (KIT)
Photogrammetry · Machine Vision · Computer Vision · Geodesy