Explaining Unreliable Perception in Automated Driving: A Fuzzy-based Monitoring Approach

📅 2025-05-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the lack of explainability of prediction errors in ML-based perception components for autonomous driving—and the consequent difficulty of establishing system-level safety assurance—this paper proposes a fuzzy logic–based explainable runtime monitor. The method integrates fuzzy inference with explainability by modeling how environmental factors (e.g., illumination, weather) affect perception reliability, enabling real-time failure detection while generating human-interpretable causal explanations. Leveraging naturalistic driving data for empirical validation and assurance-case engineering, it supports traceable verification from unit-level evidence of correct ML operation to system-level safety claims. Experiments on real-world datasets demonstrate that, compared with state-of-the-art monitors, the approach significantly reduces the occurrence of hazardous scenarios while maintaining high task availability, and explicitly identifies multiple operating conditions under which reliable operation is assured.

📝 Abstract
Autonomous systems that rely on Machine Learning (ML) utilize online fault tolerance mechanisms, such as runtime monitors, to detect ML prediction errors and maintain safety during operation. However, the lack of human-interpretable explanations for these errors can hinder the creation of strong assurances about the system's safety and reliability. This paper introduces a novel fuzzy-based monitor tailored for ML perception components. It provides human-interpretable explanations about how different operating conditions affect the reliability of perception components and also functions as a runtime safety monitor. We evaluated our proposed monitor using naturalistic driving datasets as part of an automated driving case study. The interpretability of the monitor was evaluated and we identified a set of operating conditions in which the perception component performs reliably. Additionally, we created an assurance case that links unit-level evidence of *correct* ML operation to system-level *safety*. The benchmarking demonstrated that our monitor achieved a greater increase in safety (i.e., absence of hazardous situations) while maintaining availability (i.e., ability to perform the mission) compared to state-of-the-art runtime ML monitors in the evaluated dataset.
Problem

Research questions and friction points this paper is trying to address.

Detects ML prediction errors in autonomous driving systems
Provides human-interpretable explanations for perception reliability
Links ML operation evidence to system-level safety assurance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fuzzy-based monitor for ML perception errors
Human-interpretable explanations for reliability
Runtime safety and availability enhancement
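To make the idea behind the innovation concrete, the following is a minimal, self-contained sketch of a fuzzy-rule monitor that maps operating conditions to a perception-reliability estimate plus a human-readable explanation. It is not the paper's implementation: the membership functions, rule base, and input names (`illumination`, `rain`) are illustrative assumptions.

```python
# Minimal sketch (NOT the paper's implementation): a fuzzy monitor mapping
# hypothetical operating conditions, normalized to [0, 1], to a reliability
# score and an explanation of the dominant condition.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def monitor(illumination, rain):
    """Return (reliability_score, explanation) for inputs in [0, 1]."""
    # Fuzzify the operating conditions (illustrative membership functions).
    dark = tri(illumination, -0.01, 0.0, 0.5)
    bright = tri(illumination, 0.5, 1.0, 1.01)
    heavy_rain = tri(rain, 0.5, 1.0, 1.01)
    clear = tri(rain, -0.01, 0.0, 0.5)

    # Illustrative rule base: antecedent strength via min; each rule carries
    # a reliability consequent and a textual justification.
    rules = [
        (min(bright, clear), 1.0, "bright illumination and clear weather"),
        (min(dark, heavy_rain), 0.0, "low illumination and heavy rain"),
        (dark, 0.3, "low illumination"),
        (heavy_rain, 0.3, "heavy rain"),
    ]

    # Weighted-average defuzzification over the fired rules.
    total = sum(w for w, _, _ in rules)
    score = sum(w * r for w, r, _ in rules) / total if total else 0.5
    top = max(rules, key=lambda t: t[0])
    return score, f"dominant condition: {top[2]} (activation {top[0]:.2f})"

score, why = monitor(illumination=0.9, rain=0.1)
print(round(score, 2), "-", why)
```

Because each output traces back to a named rule, the monitor can report *why* reliability is low (e.g., "low illumination and heavy rain") rather than only flagging a failure, which is the interpretability property the paper targets.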
Aniket Salvi
Engineering Resilient Cognitive Systems, Technical University of Munich, Munich, Germany
Gereon Weiss
Automation Systems, Fraunhofer Institute for Cognitive Systems IKS, Munich, Germany
Mario Trapp
Fraunhofer
Resilient Cognitive Systems · Software Engineering · Safety Engineering · Model-Based Engineering