Sparse but not Simpler: A Multi-Level Interpretability Analysis of Vision Transformers

📅 2026-03-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
The study evaluates the interpretability of sparse Vision Transformers with the multi-level IMPACT framework, finding that while structural sparsity reduces the number of circuit edges, it does not improve semantic interpretability.

📝 Abstract
Sparse neural networks are often hypothesized to be more interpretable than dense models, motivated by findings that weight sparsity can produce compact circuits in language models. However, it remains unclear whether structural sparsity itself leads to improved semantic interpretability. In this work, we systematically evaluate the relationship between weight sparsity and interpretability in Vision Transformers using DeiT-III B/16 models pruned with Wanda. To assess interpretability comprehensively, we introduce IMPACT, a multi-level framework that evaluates interpretability across four complementary levels: neurons, layer representations, task circuits, and model-level attribution. Layer representations are analyzed using BatchTopK sparse autoencoders, circuits are extracted via learnable node masking, and explanations are evaluated with transformer attribution using insertion and deletion metrics. Our results reveal a clear structural effect but limited interpretability gains. Sparse models produce circuits with approximately 2.5× fewer edges than dense models, yet the fraction of active nodes remains similar or higher, indicating that pruning redistributes computation rather than isolating simpler functional modules. Consistent with this observation, sparse models show no systematic improvements in neuron-level selectivity, SAE feature interpretability, or attribution faithfulness. These findings suggest that structural sparsity alone does not reliably yield more interpretable vision models, highlighting the importance of evaluation frameworks that assess interpretability beyond circuit compactness.
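The attribution-faithfulness evaluation mentioned above uses insertion and deletion metrics. A minimal sketch of the deletion variant, under the standard formulation (progressively remove the most salient pixels and measure how confidence degrades); the function name and the `model_fn` interface are hypothetical, not the paper's code:

```python
import numpy as np

def deletion_auc(model_fn, image, saliency, num_steps=16, baseline=0.0):
    """Deletion metric: zero out pixels in order of decreasing saliency
    and record how the model's class confidence degrades. A faithful
    attribution map yields a steep drop, i.e. a low area under the curve.

    model_fn is a hypothetical callable mapping an (H, W) array to a
    class probability; adapt it to your model and preprocessing."""
    order = np.argsort(saliency.ravel())[::-1]   # most salient pixels first
    img = image.copy()
    probs = [model_fn(img)]                      # confidence on intact input
    step = max(1, order.size // num_steps)
    for start in range(0, order.size, step):
        ys, xs = np.unravel_index(order[start:start + step], image.shape)
        img[ys, xs] = baseline                   # "delete" this batch of pixels
        probs.append(model_fn(img))
    return float(np.mean(probs))                 # simple AUC estimate over steps
```

The insertion metric is the mirror image: start from the baseline image, reveal pixels in the same order, and report the AUC of the rising confidence curve (higher is better).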
Problem

Research questions and friction points this paper is trying to address.

sparsity
interpretability
Vision Transformers
semantic interpretability
model compression
Innovation

Methods, ideas, or system contributions that make the work stand out.

sparse neural networks
Vision Transformers
interpretability evaluation
multi-level analysis
circuit extraction