Activation-Deactivation: A General Framework for Robust Post-hoc Explainable AI

📅 2025-10-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Black-box explanation methods often generate out-of-distribution samples via input occlusion, and the choice of occlusion value relies heavily on domain-specific heuristics, compromising explanation robustness and generalizability. To address this, the paper proposes the Activation-Deactivation (AD) paradigm: instead of perturbing inputs, AD switches off the parts of the network that correspond to the occluded features during forward propagation, thereby avoiding distributional shift, removing the need for prior assumptions about occlusion values, and provably preserving the original decision-making process. Building on AD, the authors design ConvAD, a plug-and-play module for trained CNNs that achieves fine-grained deactivation via feature-map masking and path-wise gating, with no additional training or fine-tuning. Evaluations across multiple models and datasets show that AD improves explanation robustness by up to 62.5% over occlusion-based methods, yields more compact explanations, and exhibits markedly slower confidence drop-off, indicating gains in fidelity, stability, and interpretability.

📝 Abstract
Black-box explainability methods are popular tools for explaining the decisions of image classifiers. A major drawback of these tools is their reliance on mutants obtained by occluding parts of the input, leading to out-of-distribution images. This raises doubts about the quality of the explanations. Moreover, choosing an appropriate occlusion value often requires domain knowledge. In this paper, we introduce a novel forward-pass paradigm, Activation-Deactivation (AD), which removes the effects of occluded input features from the model's decision-making by switching off the parts of the model that correspond to the occlusions. We introduce ConvAD, a drop-in mechanism that can be easily added to any trained Convolutional Neural Network (CNN), and which implements the AD paradigm. This leads to more robust explanations without any additional training or fine-tuning. We prove that the ConvAD mechanism does not change the decision-making process of the network. We provide experimental evaluation across several datasets and model architectures. We compare the quality of AD explanations with explanations obtained using a set of masking values, using the proxies of robustness, size, and confidence drop-off. We observe a consistent improvement in the robustness of AD explanations (up to 62.5%) compared to explanations obtained with occlusions, demonstrating that ConvAD extracts more robust explanations without the need for domain knowledge.
Problem

Research questions and friction points this paper is trying to address.

Addresses unreliable explanations from occlusion-based black-box AI methods
Eliminates need for domain knowledge in selecting occlusion values
Improves robustness of post-hoc explanations for image classifiers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Switches off model parts corresponding to occluded features
Implements Activation-Deactivation paradigm for CNNs
Provides robust explanations without additional training
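The core idea in the bullets above can be sketched in a few lines: rather than painting an occlusion value onto the input (which produces out-of-distribution images and requires choosing that value), AD gates the internal feature pathways that correspond to the occluded region, so their contribution is simply removed from the forward pass. The toy model below is purely illustrative (a "classifier head" that sums weighted channel means); it is not the paper's ConvAD implementation, and all names are hypothetical.

```python
# Hypothetical sketch of the Activation-Deactivation (AD) idea.
# Instead of injecting an occlusion value into the input, we gate
# internal feature channels off during the forward pass.

def forward(feature_maps, weights, active):
    """Toy classifier head: score = sum over *active* channels of
    weight * mean(activation). active[c] gates channel c on/off."""
    score = 0.0
    for c, fmap in enumerate(feature_maps):
        if not active[c]:
            continue  # deactivated pathway contributes nothing
        mean_act = sum(sum(row) for row in fmap) / (len(fmap) * len(fmap[0]))
        score += weights[c] * mean_act
    return score

# Three 2x2 feature maps (channels) with per-channel weights.
fmaps = [[[1.0, 2.0], [3.0, 2.0]],
         [[0.5, 0.5], [0.5, 0.5]],
         [[4.0, 0.0], [0.0, 0.0]]]
w = [0.6, 0.3, 0.1]

full = forward(fmaps, w, active=[True, True, True])
# Deactivate channel 0: its effect is removed from the decision
# without ever choosing an occlusion value for the input.
ablated = forward(fmaps, w, active=[False, True, True])
attribution = full - ablated  # importance proxy for channel 0
```

Because the gated channels contribute exactly zero, the score over the remaining active channels is identical to the original model's computation restricted to those pathways, which is the property the paper proves for ConvAD (the mechanism does not alter the network's decision-making process).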