🤖 AI Summary
Neural network perception uncertainties—both aleatoric and epistemic—pose significant safety risks in autonomous driving motion planning.
Method: This paper proposes a perception-confidence-driven adaptive safety control framework. It integrates evidential deep learning (EDL) to quantify perception uncertainty, constructs dynamic ambiguity sets from the evidential distributions, and embeds distributionally robust optimization (DRO) within a model predictive control (MPC) architecture to enable real-time, tunably conservative safety-constrained optimization.
Contribution/Results: The key innovation is the first direct mapping of EDL outputs to ambiguity set parameters, enabling continuous, confidence-adaptive adjustment of control conservativeness: high perception confidence prioritizes efficiency, while low confidence strengthens safety guarantees. Evaluated in CARLA simulations, the framework significantly improves decision-making robustness and achieves a superior safety-efficiency trade-off in complex urban driving scenarios.
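The EDL-to-ambiguity-set mapping can be illustrated with a minimal sketch. For evidential regression, the network outputs Normal-Inverse-Gamma parameters (γ, ν, α, β), from which the standard EDL identities give aleatoric variance β/(α−1) and epistemic variance β/(ν(α−1)). The `ambiguity_radius` mapping and its gains `k_alea`/`k_epis` are hypothetical stand-ins for the paper's actual parameterization, which is not specified here:

```python
import numpy as np

def edl_uncertainties(gamma, nu, alpha, beta):
    """Decompose a Normal-Inverse-Gamma evidential output into
    aleatoric and epistemic variance (standard EDL regression identities)."""
    aleatoric = beta / (alpha - 1.0)          # E[sigma^2]: data noise
    epistemic = beta / (nu * (alpha - 1.0))   # Var[mu]: shrinks as evidence nu grows
    return aleatoric, epistemic

def ambiguity_radius(aleatoric, epistemic, k_alea=1.0, k_epis=2.0):
    """Hypothetical mapping from EDL uncertainty to an ambiguity-set radius:
    low evidence (high epistemic variance) widens the set, so the downstream
    DRO constraint becomes more conservative."""
    return np.sqrt(k_alea * aleatoric + k_epis * epistemic)
```

With this mapping, a confident detection (large ν) yields a small radius and near-nominal constraints, while a low-evidence detection inflates the set continuously, matching the confidence-adaptive behavior described above.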
📝 Abstract
Safety is a critical concern in motion planning for autonomous vehicles. Modern autonomous vehicles rely on neural network-based perception, but making control decisions based on these inference results poses significant safety risks due to inherent uncertainties. To address this challenge, we present a distributionally robust optimization (DRO) framework that accounts for both aleatoric and epistemic perception uncertainties using evidential deep learning (EDL). Our approach introduces a novel ambiguity set formulation based on evidential distributions that dynamically adjusts conservativeness according to perception confidence levels. We integrate this uncertainty-aware constraint into model predictive control (MPC), yielding the DRO-EDL-MPC algorithm, which remains computationally tractable for autonomous driving applications. Validation in the CARLA simulator demonstrates that our approach maintains efficiency under high perception confidence while enforcing conservative constraints under low confidence.
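To see how a confidence-dependent ambiguity set tightens an MPC safety constraint, consider a simple longitudinal example: the worst-case clearance to an obstacle over the set is the nominal predicted clearance minus the ambiguity radius, and the speed bound follows from a stopping-distance argument. This is an illustrative sketch, not the paper's formulation; the constraint form and the names `d_pred`, `d_min`, `a_max` are assumptions:

```python
import math

def safe_speed_bound(d_pred, radius, d_min, a_max):
    """Distributionally robust surrogate for a clearance constraint:
    the worst-case gap over the ambiguity set is (d_pred - radius), and the
    speed bound is the largest speed from which braking at a_max still keeps
    the vehicle at least d_min from the obstacle (v^2 = 2 * a_max * gap)."""
    worst_case_gap = max(d_pred - radius - d_min, 0.0)
    return math.sqrt(2.0 * a_max * worst_case_gap)
```

A larger radius (lower perception confidence) shrinks the admissible speed toward zero, which is exactly the efficiency-under-confidence / conservatism-under-doubt behavior the CARLA results describe; inside the full MPC this bound would appear as a per-step constraint on the planned trajectory.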