🤖 AI Summary
This work addresses a key limitation of existing visual explanation methods: redundant features often produce cluttered saliency maps that fail to balance conciseness and decision fidelity. To overcome this, the authors propose a novel gradient-free explanation framework that, for the first time, adapts Delta Debugging—a systematic reduction strategy from software engineering—to visual interpretability. By analyzing interactions among representation units in the classifier head, the method systematically identifies the minimal sufficient subset of units required to preserve the original prediction. Integrating representation activation analysis, combinatorial testing, and CAM-based saliency map generation, the approach outperforms current CAM variants across multiple benchmarks, yielding more faithful explanations and improved target localization accuracy.
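To make the CAM-based saliency step concrete, here is a minimal sketch of how a saliency map could be built from only a selected subset of representation units rather than aggregating all of them, as standard CAM does. The function name, array shapes, and NumPy implementation are illustrative assumptions, not the authors' code.

```python
import numpy as np

def subset_cam(feature_maps, class_weights, subset):
    """Build a CAM-style saliency map from a subset of units (a sketch).

    feature_maps: (K, H, W) activations from the last convolutional layer.
    class_weights: (K,) classifier-head weights for the predicted class.
    subset: indices of the units kept by the minimal-subset search.
    """
    cam = np.zeros(feature_maps.shape[1:])
    for k in subset:                 # sum only the selected units
        cam += class_weights[k] * feature_maps[k]
    cam = np.maximum(cam, 0.0)       # keep positive evidence only (ReLU)
    if cam.max() > 0:
        cam /= cam.max()             # normalize to [0, 1] for display
    return cam
```

Restricting the weighted sum to the minimal sufficient subset is what keeps the resulting map uncluttered: units that do not contribute to the decision never enter the heatmap.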
📝 Abstract
We introduce a gradient-free framework for identifying minimal, sufficient, and decision-preserving explanations in vision models by isolating the smallest subset of representational units whose joint activation preserves the prediction. Unlike existing approaches that aggregate all units, often producing cluttered saliency maps, our approach, DD-CAM, identifies a 1-minimal subset: its joint activation suffices to preserve the prediction, and removing any single unit from the subset alters it. To isolate such subsets efficiently, we adapt delta debugging, a systematic reduction strategy from software debugging, and configure its search based on unit interactions in the classifier head: testing individual units when units do not interact, and testing unit combinations when they do. We then generate minimal, prediction-preserving saliency maps that highlight only the most essential features. Our experimental evaluation demonstrates that our approach produces more faithful explanations and achieves higher localization accuracy than state-of-the-art CAM-based approaches.
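The subset search described above can be sketched with the classic ddmin algorithm from delta debugging. The oracle `keeps_prediction` is a hypothetical stand-in for a forward pass in which only the given units are active; the loop structure below is a generic ddmin, not the authors' exact implementation.

```python
def ddmin(units, keeps_prediction):
    """Shrink `units` to a 1-minimal subset that still preserves the
    prediction: removing any single remaining unit would break it.
    `keeps_prediction(subset)` is an assumed oracle returning True when
    activating only `subset` still yields the original class."""
    n = 2  # number of chunks to split the candidate set into
    while len(units) >= 2:
        chunk = max(1, len(units) // n)
        # Try removing one chunk at a time (test the complements).
        complements = [units[:i] + units[i + chunk:]
                       for i in range(0, len(units), chunk)]
        for comp in complements:
            if comp and keeps_prediction(comp):
                units = comp              # a smaller set still suffices
                n = max(n - 1, 2)         # restart with a coarser split
                break
        else:
            if n >= len(units):           # finest granularity reached
                break
            n = min(n * 2, len(units))    # refine the split and retry
    return units
```

For example, with eight units where only units 2 and 5 jointly carry the decision, `ddmin(list(range(8)), lambda s: {2, 5}.issubset(s))` returns `[2, 5]`. Because each oracle call is one forward pass, the search needs far fewer evaluations than exhaustively testing all subsets.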