DRO-EDL-MPC: Evidential Deep Learning-Based Distributionally Robust Model Predictive Control for Safe Autonomous Driving

📅 2025-07-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Neural network perception uncertainties—both aleatoric and epistemic—pose significant safety risks in autonomous driving motion planning. Method: This paper proposes a perception-confidence-driven adaptive safety control framework. It integrates evidential deep learning (EDL) to quantify perception uncertainty, constructs dynamic ambiguity sets based on evidential distributions, and embeds distributionally robust optimization (DRO) within a model predictive control (MPC) architecture to enable real-time, tunably conservative safety-constrained optimization. Contribution/Results: The key innovation is the first direct mapping of EDL outputs to ambiguity set parameters, enabling continuous, confidence-adaptive adjustment of control conservativeness: high perception confidence prioritizes efficiency, while low confidence strengthens safety guarantees. Evaluated in CARLA simulations, the framework significantly improves decision-making robustness and achieves a superior safety-efficiency trade-off in complex urban driving scenarios.
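To make the EDL-to-ambiguity-set mapping concrete, here is a minimal, hypothetical sketch. It assumes an evidential regression head that outputs Normal-Inverse-Gamma parameters (γ, ν, α, β), from which the standard aleatoric and epistemic variance estimates are computed and combined into an ambiguity-set radius. The function names and the scale factor `k` are illustrative, not the paper's actual API or formulation.

```python
import numpy as np

def edl_uncertainties(gamma, nu, alpha, beta):
    """Standard evidential-regression moments from NIG parameters.

    Returns (predicted mean, aleatoric variance E[sigma^2],
    epistemic variance Var[mu])."""
    aleatoric = beta / (alpha - 1.0)          # E[sigma^2]
    epistemic = beta / (nu * (alpha - 1.0))   # Var[mu]
    return gamma, aleatoric, epistemic

def ambiguity_radius(aleatoric, epistemic, k=1.0):
    """Illustrative mapping from total predictive uncertainty to an
    ambiguity-set radius: a large epistemic term (low confidence)
    yields a larger radius, hence a more conservative DRO constraint."""
    return k * np.sqrt(aleatoric + epistemic)

mu, al, ep = edl_uncertainties(gamma=2.0, nu=4.0, alpha=3.0, beta=1.0)
r = ambiguity_radius(al, ep)
```

The point of the mapping is that conservativeness becomes a continuous function of perception confidence rather than a fixed worst-case margin.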

📝 Abstract
Safety is a critical concern in motion planning for autonomous vehicles. Modern autonomous vehicles rely on neural network-based perception, but making control decisions based on these inference results poses significant safety risks due to inherent uncertainties. To address this challenge, we present a distributionally robust optimization (DRO) framework that accounts for both aleatoric and epistemic perception uncertainties using evidential deep learning (EDL). Our approach introduces a novel ambiguity set formulation based on evidential distributions that dynamically adjusts the conservativeness according to perception confidence levels. We integrate this uncertainty-aware constraint into model predictive control (MPC), proposing the DRO-EDL-MPC algorithm with computational tractability for autonomous driving applications. Validation in the CARLA simulator demonstrates that our approach maintains efficiency under high perception confidence while enforcing conservative constraints under low confidence.
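The confidence-adaptive behavior the abstract describes can be illustrated with a toy one-dimensional example (my own simplification, not the paper's MPC formulation): a safety constraint on the gap to a predicted obstacle is tightened by a term proportional to the ambiguity-set radius `r`, so a large radius (low perception confidence) forces a lower speed while a small radius permits near-nominal efficiency.

```python
import numpy as np

def tightened_max_speed(gap, r, dt=0.1, d_safe=2.0, kappa=2.0, v_max=15.0):
    """Largest admissible speed v satisfying the DRO-tightened
    one-step constraint: gap - v*dt >= d_safe + kappa*r.

    All parameter names and values are illustrative."""
    v = (gap - d_safe - kappa * r) / dt
    return float(np.clip(v, 0.0, v_max))

# High confidence (small r): near-nominal speed is allowed.
v_hi = tightened_max_speed(gap=4.0, r=0.1)
# Low confidence (large r): the tightened constraint enforces caution.
v_lo = tightened_max_speed(gap=4.0, r=0.8)
```

In the actual algorithm this tightening enters as a distributionally robust constraint over the full MPC horizon; the sketch only shows the qualitative confidence-to-conservativeness coupling.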
Problem

Research questions and friction points this paper is trying to address.

Ensuring safety in autonomous vehicle motion planning
Addressing uncertainties in neural network-based perception
Balancing efficiency and conservativeness in control decisions
Innovation

Methods, ideas, or system contributions that make the work stand out.

DRO framework for perception uncertainties
Evidential deep learning dynamic adjustment
Uncertainty-aware MPC for autonomous driving
Hyeongchan Ham
School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Yuseong-gu, 34141 Daejeon, Republic of Korea
Heejin Ahn
KAIST
Control Theory · Autonomous Vehicles · Intelligent Transportation Systems