Uncertainty Gating for Cost-Aware Explainable Artificial Intelligence

📅 2026-03-31
📈 Citations: 0 · Influential: 0
🤖 AI Summary
This work addresses the instability and high computational cost of post-hoc explanation methods near ambiguous decision boundaries, where explanation reliability degrades. The study proposes epistemic uncertainty, estimated via Bayesian deep learning, as a low-cost proxy for explanation reliability, using it to decide dynamically when to generate explanations and thereby trade explanation quality against computational cost under resource constraints. The approach is evaluated systematically across multiple XAI techniques (e.g., LIME, SHAP) and model architectures, revealing a strong negative correlation between epistemic uncertainty and explanation stability. Experiments on four tabular datasets, five model types, and four XAI methods validate this relationship, while additional image classification tasks demonstrate that the findings generalize beyond tabular data.
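To make the gating idea concrete, here is a minimal sketch (not the authors' implementation): epistemic uncertainty is approximated with MC dropout via the BALD mutual-information decomposition, and a threshold decides which samples receive explanations. The model architecture, threshold value, and data below are illustrative placeholders.

```python
import numpy as np
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Small classifier with dropout so MC-dropout sampling is possible."""
    def __init__(self, d_in, d_hidden=64, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(d_hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def epistemic_uncertainty(model, x, n_samples=30):
    """BALD decomposition under MC dropout:
    epistemic = entropy of mean prediction - mean per-draw entropy."""
    model.train()  # keep dropout active at inference time
    probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
    mean_p = probs.mean(dim=0)
    total = -(mean_p * mean_p.clamp_min(1e-12).log()).sum(dim=-1)
    aleatoric = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1).mean(dim=0)
    return (total - aleatoric).numpy()

def gate(uncertainty, threshold):
    """True where uncertainty is low enough that explanations are expected to be stable."""
    return uncertainty <= threshold

model = MLP(d_in=10)
x = torch.randn(8, 10)
u = epistemic_uncertainty(model, x)
print("generate explanations for samples:", np.where(gate(u, threshold=0.05))[0])
```

The key design point is that the uncertainty estimate reuses forward passes of the predictor itself, so the gate costs far less than running LIME or SHAP on every sample.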
📝 Abstract
Post-hoc explanation methods are widely used to interpret black-box predictions, but their generation is often computationally expensive and their reliability is not guaranteed. We propose epistemic uncertainty as a low-cost proxy for explanation reliability: high epistemic uncertainty identifies regions where the decision boundary is poorly defined and where explanations become unstable and unfaithful. This insight enables two complementary use cases: "improving worst-case explanations" (routing samples to cheap or expensive XAI methods based on expected explanation reliability), and "recalling high-quality explanations" (deferring explanation generation for uncertain samples under a constrained budget). Across four tabular datasets, five diverse architectures, and four XAI methods, we observe a strong negative correlation between epistemic uncertainty and explanation stability. Further analysis shows that epistemic uncertainty distinguishes not only stable from unstable explanations, but also faithful from unfaithful ones. Experiments on image classification confirm that our findings generalize beyond tabular data.
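Both use cases from the abstract reduce to simple selection rules over per-sample uncertainty scores. The sketch below is a hedged illustration under assumed interfaces: explain_cheap, explain_robust, and the toy numbers are hypothetical placeholders, not taken from the paper.

```python
import numpy as np

def route(samples, uncertainties, threshold, explain_cheap, explain_robust):
    """Use case 1 (improving worst-case explanations): low-uncertainty samples
    go to a cheap explainer, high-uncertainty ones to a more robust, expensive one."""
    return [
        explain_cheap(s) if u <= threshold else explain_robust(s)
        for s, u in zip(samples, uncertainties)
    ]

def select_under_budget(uncertainties, budget):
    """Use case 2 (recalling high-quality explanations): with a budget of
    `budget` explanations, explain the most certain samples and defer the rest."""
    order = np.argsort(uncertainties)  # most certain first
    keep = np.zeros(len(uncertainties), dtype=bool)
    keep[order[:budget]] = True
    return keep

# Toy usage with placeholder explainers and made-up uncertainty scores.
u = np.array([0.01, 0.40, 0.05, 0.90])
print(select_under_budget(u, budget=2))  # -> [ True False  True False]
print(route(["a", "b", "c", "d"], u, threshold=0.1,
            explain_cheap=lambda s: f"cheap({s})",
            explain_robust=lambda s: f"robust({s})"))
```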
Problem

Research questions and friction points this paper is trying to address.

post-hoc explanation
explanation reliability
epistemic uncertainty
explainable AI
explanation stability
Innovation

Methods, ideas, or system contributions that make the work stand out.

epistemic uncertainty
cost-aware XAI
explanation reliability
uncertainty gating
post-hoc explanation
Georgii Mikriukov
Leibniz Institute for Agricultural Engineering and Bioeconomy (ATB)
Grégoire Montavon
Professor, Charité / BIFOLD
Explainable AI · Machine Learning · Data Science
Marina M.-C. Höhne
Leibniz Institute for Agricultural Engineering and Bioeconomy (ATB), Potsdam, Germany; University of Potsdam – Department of Computational Science, Potsdam, Germany