🤖 AI Summary
Black-box explanation methods often generate out-of-distribution samples by occluding parts of the input, and choosing the occlusion values relies heavily on domain-specific heuristics, which compromises the robustness and generalizability of the resulting explanations. To address this, the authors propose the Activation-Deactivation (AD) paradigm: instead of perturbing inputs, AD deactivates the neuron pathways corresponding to the occluded features during the forward pass, thereby avoiding distributional shift, removing the need for occlusion heuristics, and provably preserving the original decision function. Building on AD, they design ConvAD, a plug-and-play module that achieves fine-grained deactivation via feature-map masking and path-wise gating in any trained CNN, with no additional training or fine-tuning. Evaluations across multiple models and datasets show that AD improves explanation robustness by up to 62.5% over occlusion-based methods, yields more compact explanations, and exhibits a markedly slower confidence drop-off, supporting its advantages in fidelity, stability, and interpretability.
📝 Abstract
Black-box explainability methods are popular tools for explaining the decisions of image classifiers. A major drawback of these tools is their reliance on mutants obtained by occluding parts of the input, which leads to out-of-distribution images and raises doubts about the quality of the explanations. Moreover, choosing an appropriate occlusion value often requires domain knowledge. In this paper, we introduce Activation-Deactivation (AD), a novel forward-pass paradigm that removes the effects of occluded input features from the model's decision-making by switching off the parts of the model that correspond to the occlusions. We introduce ConvAD, a drop-in mechanism that implements the AD paradigm and can be added to any trained Convolutional Neural Network (CNN), yielding more robust explanations without any additional training or fine-tuning. We prove that the ConvAD mechanism does not change the decision-making process of the network. We provide experimental evaluation across several datasets and model architectures, comparing the quality of AD explanations with explanations obtained using a set of occlusion values, measured by the proxies of robustness, size, and confidence drop-off. We observe a consistent improvement in the robustness of AD explanations (up to 62.5%) compared to explanations obtained with occlusions, demonstrating that ConvAD extracts more robust explanations without the need for domain knowledge.
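To make the core idea concrete, here is a minimal NumPy sketch of the contrast the abstract draws: rather than occluding input pixels with some heuristic fill value, the forward pass itself zeroes out the feature-map activations that correspond to the "removed" features. This is an illustrative toy, not the authors' ConvAD implementation; `conv2d`, `forward`, and `deactivation_mask` are hypothetical names, and the single-layer network stands in for a full CNN.

```python
import numpy as np

def conv2d(x, kernels):
    """Naive valid convolution: x is (H, W), kernels is (C, kh, kw)."""
    C, kh, kw = kernels.shape
    H, W = x.shape
    out = np.zeros((C, H - kh + 1, W - kw + 1))
    for c in range(C):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[c, i, j] = np.sum(x[i:i + kh, j:j + kw] * kernels[c])
    return out

def forward(x, kernels, deactivation_mask=None):
    """Forward pass with optional activation-level deactivation.

    Instead of occluding pixels of x (which yields an out-of-distribution
    input), zero out the feature-map activations selected by
    deactivation_mask, switching those pathways off for this pass.
    """
    feats = np.maximum(conv2d(x, kernels), 0.0)   # ReLU activations
    if deactivation_mask is not None:
        feats = feats * deactivation_mask         # deactivate selected units
    return feats.sum(axis=(1, 2))                 # crude per-channel score
```

Comparing `forward(x, kernels)` with `forward(x, kernels, mask)` for different masks then attributes the decision to pathways rather than to heuristically filled pixels, which is the shift in perspective the AD paradigm describes.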