Enhancing System Self-Awareness and Trust of AI: A Case Study in Trajectory Prediction and Planning

📅 2025-04-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
In autonomous driving trajectory prediction, data-driven AI methods are sensitive to distribution shifts because they rely on the i.i.d. assumption, and their black-box nature limits interpretability and trustworthiness. To address these challenges, this paper proposes TrustMHE, a framework that couples AI-driven, uncertainty-aware out-of-distribution detection with control-driven moving horizon estimation (MHE), establishing a closed-loop "monitor–reflect–intervene" paradigm for trust enhancement. It introduces nonlinear MHE-based state estimation and a closed-loop collaborative verification mechanism, enabling real-time distribution-shift detection, suppression of error propagation, and safety-critical re-planning. Evaluated across three representative traffic simulation scenarios, TrustMHE demonstrates significant improvements in prediction robustness and decision traceability. The framework provides a principled pathway toward deploying trustworthy AI in autonomous vehicle planning, bridging the gap between learning-based prediction and control-theoretic safety guarantees.

📝 Abstract
In the trajectory planning of automated driving, data-driven statistical artificial intelligence (AI) methods are increasingly established for predicting the emergent behavior of other road users. While these methods achieve exceptional performance in defined datasets, they usually rely on the independent and identically distributed (i.i.d.) assumption and thus tend to be vulnerable to distribution shifts that occur in the real world. In addition, these methods lack explainability due to their black box nature, which poses further challenges in terms of the approval process and social trustworthiness. Therefore, in order to use the capabilities of data-driven statistical AI methods in a reliable and trustworthy manner, the concept of TrustMHE is introduced and investigated in this paper. TrustMHE represents a complementary approach, independent of the underlying AI systems, that combines AI-driven out-of-distribution detection with control-driven moving horizon estimation (MHE) to enable not only detection and monitoring, but also intervention. The effectiveness of the proposed TrustMHE is evaluated and proven in three simulation scenarios.
Problem

Research questions and friction points this paper is trying to address.

Addressing vulnerability of AI to real-world distribution shifts
Improving explainability of black-box AI trajectory prediction methods
Enhancing trust via detection, monitoring, and intervention capabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines AI-driven out-of-distribution detection with control-driven moving horizon estimation (MHE)
Operates independently of the underlying AI system, as a complementary trust layer
Enables not only detection and monitoring, but also active intervention
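The paper itself does not publish code, but the monitor–reflect–intervene idea can be illustrated with a minimal sketch: an out-of-distribution score on the predictor's residual (here a Mahalanobis distance, an assumption; the paper's detector may differ), and a fallback estimate over a sliding horizon when the score exceeds a threshold (here a least-squares constant-velocity fit standing in for the paper's nonlinear MHE). All function names, the 1-D state, and the threshold value are hypothetical.

```python
import numpy as np

def ood_score(residual, mean, cov_inv):
    """Mahalanobis distance of a prediction residual from the
    training-time residual distribution (stand-in OOD detector)."""
    d = np.atleast_1d(residual) - mean
    return float(np.sqrt(d @ cov_inv @ d))

def mhe_fallback(window):
    """Least-squares constant-velocity fit over a sliding horizon,
    a simplified stand-in for nonlinear MHE; returns the
    one-step-ahead position prediction."""
    t = np.arange(len(window), dtype=float)
    A = np.stack([np.ones_like(t), t], axis=1)   # [1, t] basis
    (p0, v), *_ = np.linalg.lstsq(A, np.asarray(window, float), rcond=None)
    return p0 + v * len(window)

def monitor_reflect_intervene(ai_pred, obs_window, mean, cov_inv, threshold=3.0):
    """Monitor: score the AI prediction's residual against the last observation.
    Reflect/intervene: above the threshold, fall back to the MHE estimate."""
    score = ood_score(ai_pred - obs_window[-1], mean, cov_inv)
    if score > threshold:
        return mhe_fallback(obs_window), True    # intervened
    return ai_pred, False                        # AI prediction trusted
```

With a well-behaved prediction the AI output passes through unchanged; an implausible one triggers the horizon-based fallback, which is the closed-loop behavior the paper argues is needed to contain distribution-shift errors.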