From Weights to Concepts: Data-Free Interpretability of CLIP via Singular Vector Decomposition

πŸ“… 2026-03-25
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Existing interpretability methods for vision-language models rely on data-driven activations, making them susceptible to dataset bias and limiting them to coarse granularity. This work proposes SITH, a framework that achieves, for the first time, fully data-agnostic, fine-grained interpretation of CLIP's weight space. By applying singular vector decomposition directly to the value-output matrices of the vision transformer's attention heads and interpreting the result with COMP (Coherent Orthogonal Matching Pursuit), SITH decomposes each singular vector into a sparse combination of human-understandable concepts. This enables precise concept-level model editing and semantic-basis analysis without any training or input data, yielding high-fidelity, semantically coherent interpretations of individual attention heads. Experiments show that SITH can enhance or suppress specific concepts to improve downstream task performance, and reveal that fine-tuning primarily reweights a stable set of semantic bases rather than learning new ones.

πŸ“ Abstract
As vision-language models are deployed at scale, understanding their internal mechanisms becomes increasingly critical. Existing interpretability methods predominantly rely on activations, making them dataset-dependent, vulnerable to data bias, and often restricted to coarse head-level explanations. We introduce SITH (Semantic Inspection of Transformer Heads), a fully data-free, training-free framework that directly analyzes CLIP's vision transformer in weight space. For each attention head, we decompose its value-output matrix into singular vectors and interpret each one via COMP (Coherent Orthogonal Matching Pursuit), a new algorithm that explains them as sparse, semantically coherent combinations of human-interpretable concepts. We show that SITH yields coherent, faithful intra-head explanations, validated through reconstruction fidelity and interpretability experiments. This allows us to use SITH for precise, interpretable weight-space model edits that amplify or suppress specific concepts, improving downstream performance without retraining. Furthermore, we use SITH to study model adaptation, showing how fine-tuning primarily reweights a stable semantic basis rather than learning entirely new features.
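The pipeline the abstract describes, SVD of a head's value-output matrix followed by a sparse concept decomposition of each singular vector, can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's COMP algorithm: it uses plain greedy orthogonal matching pursuit, random stand-ins for the weight matrix and the concept dictionary, and hypothetical names (`W_vo`, `concepts`) throughout. In the actual method the dictionary would be CLIP text embeddings of concept descriptions.

```python
import numpy as np

def omp(dictionary, target, k):
    """Greedy orthogonal matching pursuit: approximate `target` as a
    sparse combination of at most k dictionary columns."""
    residual = target.copy()
    support = []
    coef = np.zeros(dictionary.shape[1])
    for _ in range(k):
        # Pick the concept most correlated with the current residual.
        j = int(np.argmax(np.abs(dictionary.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit least squares on the selected support, then update residual.
        sub = dictionary[:, support]
        sol, *_ = np.linalg.lstsq(sub, target, rcond=None)
        coef[:] = 0.0
        coef[support] = sol
        residual = target - sub @ sol
    return coef

# Hypothetical setup: d-dim embedding space, one head's value-output
# matrix W_vo, and a dictionary of unit-norm concept embeddings
# (in the paper these would come from CLIP's text encoder).
rng = np.random.default_rng(0)
d, n_concepts = 64, 200
W_vo = rng.standard_normal((d, d))
concepts = rng.standard_normal((d, n_concepts))
concepts /= np.linalg.norm(concepts, axis=0)

# Step 1: SVD of the head's weight matrix; columns of U are the
# singular directions to be explained.
U, S, Vt = np.linalg.svd(W_vo)

# Step 2: explain the top singular vector as a sparse concept combination.
coef = omp(concepts, U[:, 0], k=5)
print(int(np.count_nonzero(coef)))  # at most 5 active concepts
```

Weight-space editing then amounts to rescaling the singular values associated with a target concept before recomposing the matrix, which is what makes the edits training-free.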
Problem

Research questions and friction points this paper is trying to address.

interpretability
vision-language models
data-free
CLIP
attention heads
Innovation

Methods, ideas, or system contributions that make the work stand out.

data-free interpretability
singular vector decomposition
semantic concept extraction
weight-space model editing
CLIP explainability