🤖 AI Summary
Deep neural networks often rely on spurious features, compromising model reliability and interpretability. To address this, we propose a self-supervised, multi-stage feature masking method that operates without additional annotations: it adaptively masks deep-layer encoded features at the sample level to encourage models to attend to genuine discriminative features. Our key innovation lies in embedding the masking mechanism within a multi-scale feature space and dynamically optimizing the masking strategy via self-supervised signals. Evaluated on ImageNet100, CUB-200, and other benchmarks, our approach consistently improves both interpretability—measured by the Energy Pointing Game (+8.2%)—and classification accuracy (+1.5%). The gains are robust across diverse architectures, including ResNet and Vision Transformers, and generalize effectively to out-of-distribution settings and fine-grained visual recognition tasks.
📝 Abstract
It has been observed that deep neural networks (DNNs) often rely on both genuine and spurious features. In this work, we propose "Amending Inherent Interpretability via Self-Supervised Masking" (AIM), a simple yet surprisingly effective method that promotes the network's use of genuine features over spurious alternatives without requiring additional annotations. In particular, AIM uses features at multiple encoding stages to guide a self-supervised, sample-specific feature-masking process. As a result, AIM enables the training of well-performing, inherently interpretable models that faithfully summarize the decision process. We validate AIM across a diverse range of challenging datasets that test both out-of-distribution generalization and fine-grained visual understanding. These include general-purpose classification benchmarks such as ImageNet100, HardImageNet, and ImageWoof, as well as fine-grained classification datasets such as Waterbirds, TravelingBirds, and CUB-200. AIM demonstrates significant dual benefits: interpretability improvements, as measured by the Energy Pointing Game (EPG) score, and accuracy gains over strong baselines. These consistent gains across domains and architectures provide compelling evidence that AIM promotes the use of genuine, meaningful features that directly contribute to improved generalization and human-aligned interpretability.
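To make the idea of sample-specific feature masking concrete, here is a minimal NumPy sketch. It is not the paper's AIM algorithm (which learns the masking strategy via self-supervised signals across multiple encoding stages); it only illustrates the basic operation of masking deep-layer feature maps per sample, keeping the most strongly activated spatial locations. The function name, the use of mean channel activation as importance, and the `keep_ratio` parameter are all illustrative assumptions.

```python
import numpy as np

def sample_specific_mask(features, keep_ratio=0.25):
    """Illustrative sketch (not the paper's exact method): zero out
    all but the top `keep_ratio` fraction of spatial locations in each
    sample's feature map, using mean channel activation as importance."""
    B, C, H, W = features.shape
    # Per-sample spatial importance: mean activation over channels.
    importance = features.mean(axis=1)             # (B, H, W)
    flat = importance.reshape(B, -1)               # (B, H*W)
    # Per-sample threshold so roughly `keep_ratio` of locations survive.
    thresh = np.quantile(flat, 1.0 - keep_ratio, axis=1, keepdims=True)
    mask = (flat >= thresh).reshape(B, 1, H, W).astype(features.dtype)
    return features * mask, mask

# Toy usage: random "deep features" for a batch of 2 samples.
rng = np.random.default_rng(0)
feats = rng.random((2, 8, 4, 4))
masked, mask = sample_specific_mask(feats, keep_ratio=0.25)
```

In AIM the mask is not a fixed top-k heuristic like this; it is optimized during training so that the surviving features are the genuinely discriminative ones, which is what drives the reported interpretability and accuracy gains.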