Uncertainty-Aware Subset Selection for Robust Visual Explainability under Distribution Shifts

📅 2025-12-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing visual explanation methods perform well under in-distribution (ID) conditions but yield redundant, unstable, and uncertainty-sensitive interpretations under out-of-distribution (OOD) settings. To address this, we propose a training-free robust subset selection framework that, for the first time, integrates hierarchical gradient-based uncertainty estimation with submodular optimization. Our approach employs adaptive weight perturbation to guide the selection of diverse, information-rich salient regions. Crucially, it imposes no additional computational or architectural burden on the underlying model. Extensive experiments across multiple ID and OOD benchmark datasets demonstrate that our method simultaneously enhances explanation fidelity and stability in both ID and OOD regimes, effectively mitigating redundancy and instability. This work establishes a novel paradigm for trustworthy visual explanation under distribution shift.

📝 Abstract
Subset selection-based methods are widely used to explain deep vision models: they attribute predictions by highlighting the most influential image regions and support object-level explanations. While these methods perform well in in-distribution (ID) settings, their behavior under out-of-distribution (OOD) conditions remains poorly understood. Through extensive experiments across multiple ID-OOD dataset pairs, we find that the reliability of existing subset-based methods degrades markedly, yielding redundant, unstable, and uncertainty-sensitive explanations. To address these shortcomings, we introduce a framework that combines submodular subset selection with layer-wise, gradient-based uncertainty estimation to improve robustness and fidelity without requiring additional training or auxiliary models. Our approach estimates uncertainty via adaptive weight perturbations and uses these estimates to guide submodular optimization, ensuring diverse and informative subset selection. Empirical evaluations show that, beyond mitigating the weaknesses of existing methods under OOD scenarios, our framework also yields improvements in ID settings. These findings highlight limitations of current subset-based approaches and demonstrate how uncertainty-driven optimization can enhance attribution and object-level interpretability, paving the way for more transparent and trustworthy AI in real-world vision applications.
Problem

Research questions and friction points this paper is trying to address.

Improves robustness of visual explanations under distribution shifts
Addresses redundancy and instability in subset selection methods
Enhances attribution fidelity without additional training or models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines submodular selection with gradient-based uncertainty estimation
Uses adaptive weight perturbations to estimate uncertainty
Guides submodular optimization with uncertainty for robust explanations
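The two ingredients above (perturbation-based uncertainty and greedy submodular selection) can be sketched in a few lines. This is a hypothetical toy illustration, not the paper's implementation: `score_fn`, the Gaussian weight perturbation, and the saliency-minus-redundancy gain are all stand-in assumptions to make the mechanics concrete.

```python
import numpy as np

def perturbation_uncertainty(score_fn, weights, region, n_samples=8, sigma=0.05, seed=0):
    """Estimate how sensitive a region's saliency score is to small weight
    perturbations: perturb the (flattened) model weights with Gaussian noise
    and return the standard deviation of the resulting scores.
    `score_fn(weights, region) -> scalar saliency` is a hypothetical stand-in
    for the model's attribution computation."""
    rng = np.random.default_rng(seed)
    scores = [score_fn(weights + sigma * rng.standard_normal(weights.shape), region)
              for _ in range(n_samples)]
    return float(np.std(scores))

def greedy_subset(regions, gain_fn, k):
    """Standard greedy maximization of a monotone submodular objective:
    at each step, add the region with the largest marginal gain
    `gain_fn(selected_so_far, candidate)`."""
    selected = []
    remaining = list(regions)
    for _ in range(min(k, len(remaining))):
        best = max(remaining, key=lambda r: gain_fn(selected, r))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy usage: regions are feature vectors; the gain rewards saliency (vector
# norm), penalizes redundancy (cosine similarity to already-selected regions),
# and down-weights regions whose scores are unstable under weight perturbation.
rng = np.random.default_rng(0)
feats = {i: rng.normal(size=4) for i in range(6)}
weights = rng.normal(size=4)
score_fn = lambda w, r: float(w @ feats[r])          # toy saliency score
unc = {r: perturbation_uncertainty(score_fn, weights, r) for r in feats}

def gain(selected, r):
    sal = np.linalg.norm(feats[r])
    red = max((abs(feats[r] @ feats[s]) /
               (np.linalg.norm(feats[r]) * np.linalg.norm(feats[s]) + 1e-9)
               for s in selected), default=0.0)
    return sal - red - unc[r]

picked = greedy_subset(list(feats), gain, k=3)
```

The greedy loop is the usual (1 - 1/e)-approximation scheme for monotone submodular objectives; the uncertainty term simply reshapes the marginal gains so that regions with perturbation-unstable scores are selected last, matching the paper's stated goal of diverse, uncertainty-aware subsets.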