Attention Sinks and Compression Valleys in LLMs are Two Sides of the Same Coin

📅 2025-10-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Attention sinks and compression valleys, previously studied in isolation, exhibit unexplained correlations in large language models (LLMs), hindering a unified understanding of hierarchical information processing. Method: We propose the "Mix-Compress-Refine" framework, which unifies the two phenomena by identifying a common origin: massive activations that the beginning-of-sequence token develops in the residual stream at intermediate layers. The analysis combines theoretical derivation with empirical validation across models from 410M to 120B parameters, integrating representation-compression analysis, entropy-reduction estimation, and targeted ablation studies. Contribution/Results: We show that the same token-induced activations simultaneously drive attention sinks and compression valleys, derive bounds on the entropy reduction imposed by compression, and explain why the optimal layer depth varies by task. The framework characterizes the layered computational mechanics of LLMs, advancing principled interpretability of deep Transformer architectures.

📝 Abstract
Attention sinks and compression valleys have attracted significant attention as two puzzling phenomena in large language models, but have been studied in isolation. In this work, we present a surprising connection between attention sinks and compression valleys, tracing both to the formation of massive activations in the residual stream. We prove theoretically that massive activations necessarily produce representational compression and establish bounds on the resulting entropy reduction. Through experiments across several models (410M-120B parameters), we confirm that when the beginning-of-sequence token develops extreme activation norms in the middle layers, both compression valleys and attention sinks emerge simultaneously. Targeted ablation studies validate our theoretical predictions. This unified view motivates us to propose the Mix-Compress-Refine theory of information flow, as an attempt to explain how LLMs organize their computation in depth by controlling attention and representational compression via massive activations. Specifically, we posit that Transformer-based LLMs process tokens in three distinct phases: (1) broad mixing in the early layers, (2) compressed computation with limited mixing in the middle layers, and (3) selective refinement in the late layers. Our framework helps explain why embedding tasks perform best at intermediate layers, whereas generation tasks benefit from full-depth processing, clarifying differences in task-dependent representations.
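The two phenomena the abstract ties together can both be measured with simple diagnostics: an attention sink shows up as attention mass concentrating on the first (BOS) token, and a massive activation shows up as that token's residual-stream norm dwarfing the rest. The sketch below illustrates both metrics on synthetic tensors; the function names, shapes, and thresholds are assumptions for illustration, not the paper's actual code.

```python
import numpy as np

def sink_score(attn, sink_idx=0):
    """Fraction of attention mass directed at one key position
    (commonly the BOS token), averaged over heads and queries.
    attn has shape (heads, queries, keys); each row sums to 1."""
    return float(attn[:, :, sink_idx].mean())

def bos_norm_ratio(hidden):
    """Ratio of the first token's hidden-state norm to the mean norm
    of the remaining tokens; large values flag a massive activation.
    hidden has shape (tokens, dim)."""
    norms = np.linalg.norm(hidden, axis=-1)
    return float(norms[0] / norms[1:].mean())

# Toy layer whose heads mostly attend to the first key.
rng = np.random.default_rng(0)
attn = rng.random((4, 8, 8))
attn[:, :, 0] += 10.0                  # bias mass toward the first key
attn /= attn.sum(axis=-1, keepdims=True)

hidden = rng.normal(size=(8, 16))
hidden[0] *= 100.0                     # simulate a massive BOS activation

print(sink_score(attn))       # far above the uniform 1/8 -> attention sink
print(bos_norm_ratio(hidden)) # >> 1 -> massive activation
```

On real models one would extract `attn` and `hidden` per layer from the forward pass; the paper's finding is that both diagnostics spike together in the middle layers.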
Problem

Research questions and friction points this paper is trying to address.

Unifying attention sinks and compression valleys through massive activations
Establishing theoretical bounds for representational compression entropy
Proposing Mix-Compress-Refine theory for Transformer computation phases
Innovation

Methods, ideas, or system contributions that make the work stand out.

Connecting attention sinks and compression valleys via massive activations
Proposing Mix-Compress-Refine theory for LLM computation
Identifying three-phase token processing in Transformer models
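The compression valley itself can be quantified with a spectral-entropy measure: when one massive activation dominates the hidden-state matrix, its singular-value spectrum collapses onto a few directions and the entropy drops. This sketch uses a Shannon entropy over the normalized squared singular values; it is an illustrative estimator under that assumption, not necessarily the exact quantity bounded in the paper.

```python
import numpy as np

def representation_entropy(hidden):
    """Shannon entropy of the normalized singular-value spectrum of a
    (tokens x dim) hidden-state matrix. Low entropy means the
    representations collapse onto few directions (a compression valley)."""
    s = np.linalg.svd(hidden - hidden.mean(axis=0), compute_uv=False)
    p = s**2 / (s**2).sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(1)
spread = rng.normal(size=(32, 64))        # well-spread representations
collapsed = spread.copy()
collapsed[0] = 1e3 * rng.normal(size=64)  # one massive activation dominates

print(representation_entropy(spread))
print(representation_entropy(collapsed))  # markedly lower than for `spread`
```

Tracking this entropy across layers would trace the valley: high in early mixing layers, low in the compressed middle layers, and recovering during late refinement.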