Rashomon in the Streets: Explanation Ambiguity in Scene Understanding

📅 2025-09-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In safety-critical tasks such as motion prediction for autonomous driving, eXplainable Artificial Intelligence (XAI) confronts the Rashomon effect: multiple high-performing models yield substantially divergent explanations for identical predictions, undermining explanation credibility and uniqueness. Method: This work presents the first empirical quantification of the Rashomon effect in real-world driving scenarios. Scenes are modeled as Qualitative eXplainable Graphs (QXGs); Rashomon sets of two distinct model classes, pair-based gradient-boosted trees and Graph Neural Networks, are trained; and explanation consistency is systematically assessed both within and across architectures via feature attribution methods. Contribution/Results: Despite comparable predictive performance, the models exhibit pervasive explanation divergence, both within and across architectures, demonstrating that explanation ambiguity is an intrinsic property of the task rather than an artifact of modeling bias. These findings expose fundamental reliability limits of XAI in autonomous driving and establish a benchmark for explainability evaluation and robust XAI design.

📝 Abstract
Explainable AI (XAI) is essential for validating and trusting models in safety-critical applications like autonomous driving. However, the reliability of XAI is challenged by the Rashomon effect, where multiple, equally accurate models can offer divergent explanations for the same prediction. This paper provides the first empirical quantification of this effect for the task of action prediction in real-world driving scenes. Using Qualitative Explainable Graphs (QXGs) as a symbolic scene representation, we train Rashomon sets of two distinct model classes: interpretable, pair-based gradient boosting models and complex, graph-based Graph Neural Networks (GNNs). Using feature attribution methods, we measure the agreement of explanations both within and between these classes. Our results reveal significant explanation disagreement. Our findings suggest that explanation ambiguity is an inherent property of the problem, not just a modeling artifact.
Problem

Research questions and friction points this paper is trying to address.

Quantifying explanation ambiguity in action prediction
Evaluating Rashomon effect across interpretable and complex models
Assessing explanation reliability in autonomous driving systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Qualitative Explainable Graphs for symbolic scene representation
Trains Rashomon sets with gradient boosting and Graph Neural Networks
Measures explanation agreement using feature attribution methods
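The agreement measurement described above can be illustrated with a minimal sketch: given one feature attribution vector per model in a Rashomon set (all explaining the same prediction), compute the mean pairwise Spearman rank correlation. The attribution values below are synthetic stand-ins; the paper's actual models (gradient boosting on QXG object pairs, GNNs on full graphs) and its specific attribution methods are not reproduced here.

```python
# Sketch of cross-model explanation agreement via rank correlation.
# Attributions are synthetic; a real setup would extract them from each
# trained model with a feature attribution method (e.g. SHAP-style scores).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_models, n_features = 5, 8

# One attribution vector per model, all for the same prediction (synthetic).
attributions = rng.normal(size=(n_models, n_features))

def pairwise_agreement(attr):
    """Mean Spearman rank correlation over all pairs of models."""
    scores = []
    for i in range(len(attr)):
        for j in range(i + 1, len(attr)):
            rho, _ = spearmanr(attr[i], attr[j])
            scores.append(rho)
    return float(np.mean(scores))

print(f"mean pairwise agreement: {pairwise_agreement(attributions):.3f}")
```

A value near 1 would indicate consistent explanations across the Rashomon set; values near 0 (likely for independently trained equally accurate models, per the paper's findings) indicate explanation ambiguity.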