🤖 AI Summary
This work addresses the limitations of existing attribution methods for Vision Transformers (ViTs), which rely on input perturbations and often fail to accurately identify image regions with genuine causal influence on predictions. To overcome this, the authors propose a causal attribution mechanism based on interventions in intermediate layer activations. Specifically, patch-level activations from a source image are embedded into a neutral target context, and the resulting change in the target class score is used as a direct measure of each patch's causal effect within the model's internal representation. By operating in the activation space rather than the input space, this approach circumvents the spatial blurring caused by high-level global mixing in ViTs, thereby significantly improving both the fidelity and localization accuracy of attribution maps. Extensive experiments across multiple ViT architectures and standard evaluation metrics demonstrate consistent superiority over current attribution techniques.
📄 Abstract
Attribution methods for Vision Transformers (ViTs) aim to identify image regions that influence model predictions, but producing faithful and well-localized attributions remains challenging. Existing gradient-based and perturbation-based techniques often fail to isolate the causal contribution of internal representations associated with individual image patches. The key challenge is that class-relevant evidence is formed through interactions between patch tokens across layers, and input-level perturbations can be poor proxies for patch importance, since they may fail to reconstruct the internal evidence actually used by the model. We propose Causal Attribution via Activation Patching (CAAP), which estimates the contribution of individual image patches to the ViT's prediction by directly intervening on internal activations rather than using learned masks or synthetic perturbation patterns. For each patch, CAAP inserts the corresponding source-image activations into a neutral target context over an intermediate range of layers and uses the resulting target-class score as the attribution signal. The resulting attribution map reflects the causal effect of patch-associated internal representations on the model's prediction. The causal intervention serves as a principled measure of patch influence by capturing class-relevant evidence after initial representation formation, while avoiding late-layer global mixing that can reduce spatial specificity. Across multiple ViT backbones and standard metrics, CAAP significantly outperforms existing methods and produces more faithful and localized attributions.
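The abstract's core procedure (run a source image and a neutral image to an intermediate layer, swap in one patch's activations, and read off the change in the target-class score) can be sketched in a few lines. The following is a minimal NumPy illustration, not the paper's implementation: the two-layer model, the zero "neutral" context, and all names (`early_layers`, `late_layers`, `caap_attribution`) are toy stand-ins chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a ViT: patch tokens -> two "layers" -> pooled class scores.
NUM_PATCHES, DIM, NUM_CLASSES = 9, 8, 3
W1 = rng.standard_normal((DIM, DIM)) * 0.3
W2 = rng.standard_normal((DIM, DIM)) * 0.3
W_cls = rng.standard_normal((DIM, NUM_CLASSES)) * 0.3

def early_layers(tokens):
    # Lower part of the network, up to the intervention point.
    return np.maximum(tokens @ W1, 0.0)

def late_layers(acts):
    # Upper part: remaining layer, mean pooling, classification head.
    h = np.maximum(acts @ W2, 0.0)
    return h.mean(axis=0) @ W_cls  # class scores

def caap_attribution(source_tokens, neutral_tokens, target_class):
    """For each patch, insert the source activation into the neutral
    context and record the change in the target-class score."""
    src_acts = early_layers(source_tokens)
    base_acts = early_layers(neutral_tokens)
    base_score = late_layers(base_acts)[target_class]
    attribution = np.zeros(NUM_PATCHES)
    for i in range(NUM_PATCHES):
        patched = base_acts.copy()
        patched[i] = src_acts[i]  # intervene on patch i only
        attribution[i] = late_layers(patched)[target_class] - base_score
    return attribution

source = rng.standard_normal((NUM_PATCHES, DIM))
neutral = np.zeros((NUM_PATCHES, DIM))  # crude "neutral" context stand-in
attr = caap_attribution(source, neutral, target_class=0)
```

In a real ViT the intervention would be applied over a range of intermediate blocks (e.g. via forward hooks) rather than at a single split point, and the neutral context would be a genuinely uninformative image rather than zeros; the per-patch loop and score-difference readout stay the same.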