Interpreting ResNet-based CLIP via Neuron-Attention Decomposition

📅 2025-09-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the limited interpretability of CLIP-ResNet models by proposing Neuron-Attention Decomposition (NAD), which treats each pairing of an individual neuron with a subsequent attention head as an interpretable computational unit. The authors show that each neuron–attention pair is well approximated by a single semantic direction in the joint image–text embedding space, allowing it to be linked explicitly to textual concepts; that only a sparse set of pairs contributes significantly to the output; and that some pairs, while polysemantic, represent sub-concepts of their corresponding neurons. Leveraging these insights, the method enables **training-free semantic segmentation**, outperforming previous CLIP-ResNet approaches, and supports monitoring of dataset distribution shifts. The core contribution is establishing the neuron–attention pair as a fundamental, semantically meaningful computational unit, yielding an interpretable, generalizable, and training-free analytical framework for multimodal vision-language models.

📝 Abstract
We present a novel technique for interpreting the neurons in CLIP-ResNet by decomposing their contributions to the output into individual computation paths. More specifically, we analyze all pairwise combinations of neurons and the following attention heads of CLIP's attention-pooling layer. We find that these neuron-head pairs can be approximated by a single direction in CLIP-ResNet's image-text embedding space. Leveraging this insight, we interpret each neuron-head pair by associating it with text. Additionally, we find that only a sparse set of the neuron-head pairs have a significant contribution to the output value, and that some neuron-head pairs, while polysemantic, represent sub-concepts of their corresponding neurons. We use these observations for two applications. First, we employ the pairs for training-free semantic segmentation, outperforming previous methods for CLIP-ResNet. Second, we utilize the contributions of neuron-head pairs to monitor dataset distribution shifts. Our results demonstrate that examining individual computation paths in neural networks uncovers interpretable units, and that such units can be utilized for downstream tasks.
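The decomposition the abstract describes can be sketched numerically. The sketch below is a minimal illustration, not the paper's implementation: it assumes each neuron-head pair (n, h) is approximated by a fixed unit direction in the joint embedding space scaled by a per-pair activation, so the pooled image representation is the sum of pair contributions. All arrays here (`v`, `a`, `text_embs`) are random stand-ins for quantities that would come from a real CLIP-ResNet and its text encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: N neurons, H attention heads, D embedding dim.
N, H, D = 8, 4, 16

# Assumed setup: pair (n, h) contributes a scalar activation a[n, h]
# times a single unit direction v[n, h] in the image-text embedding space.
v = rng.normal(size=(N, H, D))
v /= np.linalg.norm(v, axis=-1, keepdims=True)   # unit directions
a = rng.normal(size=(N, H))                      # stand-in activations

# The pooled image representation decomposes into the sum of pair contributions.
contributions = a[..., None] * v                 # (N, H, D)
image_rep = contributions.sum(axis=(0, 1))       # (D,)

# Interpreting a pair: associate its direction with the nearest "text"
# embedding (random stand-ins for CLIP text-encoder outputs).
text_embs = rng.normal(size=(5, D))
text_embs /= np.linalg.norm(text_embs, axis=-1, keepdims=True)
best_concept = int(np.argmax(v[0, 0] @ text_embs.T))  # concept for pair (0, 0)

# Sparsity: rank pairs by contribution norm and keep the top few; the paper
# reports that only a sparse set of pairs contributes significantly.
norms = np.linalg.norm(contributions, axis=-1).ravel()
top_pairs = np.argsort(norms)[::-1][:5]
```

Because the decomposition is exact by construction here (the output is literally the sum of the pair terms), ranking pairs by contribution norm directly identifies which computation paths dominate, mirroring how the paper selects interpretable units for segmentation and distribution monitoring.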
Problem

Research questions and friction points this paper is trying to address.

Interpreting CLIP-ResNet neurons by decomposing their computation paths
Analyzing neuron-head pairs to understand their semantic contributions
Developing interpretable units for segmentation and distribution monitoring
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decomposing neuron contributions via computation paths
Approximating neuron-head pairs with embedding directions
Utilizing sparse neuron-head pairs for segmentation