Information Entropy Invariance: Enhancing Length Extrapolation in Attention Mechanisms

📅 2025-01-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models extrapolate poorly to contexts longer than their training length: as the sequence grows, attention distributions become unstable and their entropy rises. Method: This paper proposes an attention scaling approach grounded in information entropy invariance. Theoretically, it analyzes entropy invariance in attention and identifies attention score dilution as a key bottleneck in long-range modeling. Methodologically, it introduces two entropy-constrained scaled temperatures: the training-free InfoScale for dot-product attention and the theoretically analyzed CosScale for cosine attention, both of which keep the attention entropy consistent as the context grows. Contribution/Results: Evaluated on the GAU-α architecture, the combined approach extends the context window to 64× the training length and outperforms seven existing methods, achieving state-of-the-art performance on the corresponding long-context tasks.
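To make the entropy-growth problem concrete, the short numpy sketch below measures the Shannon entropy of a single query's attention distribution as the number of keys grows; under the usual 1/√d scaling the entropy rises roughly with log n, which is the attention score dilution the summary refers to. The shapes, the Gaussian random inputs, and the helper name are illustrative assumptions, not code from the paper.

```python
import numpy as np

def attention_entropy(q, k, scale):
    """Shannon entropy (nats) of one query's softmax attention over len(k) keys."""
    logits = scale * (k @ q)                 # (n,) raw attention scores
    p = np.exp(logits - logits.max())        # numerically stable softmax
    p /= p.sum()
    return -(p * np.log(p + 1e-12)).sum()

rng = np.random.default_rng(0)
d = 64
q = rng.normal(size=d)
for n in (512, 2048, 8192):                  # longer contexts -> more keys to attend over
    k = rng.normal(size=(n, d))
    h = attention_entropy(q, k, scale=1.0 / np.sqrt(d))
    print(f"n={n:5d}  entropy={h:.2f} nats  (uniform would be {np.log(n):.2f})")
```

With a fixed temperature, the printed entropy climbs toward the uniform limit log n as the context lengthens, which is exactly the dilution the proposed scalings are meant to counteract.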

📝 Abstract
Improving the length extrapolation capabilities of Large Language Models (LLMs) remains a critical challenge in natural language processing. Many recent efforts have focused on modifying the scaled dot-product attention mechanism, and often introduce scaled temperatures without rigorous theoretical justification. To fill this gap, we introduce a novel approach based on information entropy invariance. We propose two new scaled temperatures to enhance length extrapolation. First, a training-free method InfoScale is designed for dot-product attention, and preserves focus on original tokens during length extrapolation by ensuring information entropy remains consistent. Second, we theoretically analyze the impact of scaling (CosScale) on cosine attention. Experimental data demonstrates that combining InfoScale and CosScale achieves state-of-the-art performance on the GAU-α model with a context window extended to 64 times the training length, and outperforms seven existing methods. Our analysis reveals that significantly increasing CosScale approximates windowed attention, and highlights the significance of attention score dilution as a key challenge in long-range context handling. The code and data are available at https://github.com/HT-NEKO/InfoScale.
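The abstract describes InfoScale as a training-free temperature for dot-product attention that keeps the attention entropy consistent beyond the training length, and CosScale as a scale applied to cosine attention. The sketch below is a minimal illustration of both ideas; the log-length form of the dot-product temperature and the cos_scale value of 10.0 are assumptions of this sketch, not the paper's exact formulas (see the linked repository for the authors' implementation).

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_attention(q, k, v, n_train=None):
    """Dot-product attention; if n_train is given, sharpen the logits by a
    log-length ratio so the entropy at a longer evaluation length stays near
    its training-length level (an InfoScale-style heuristic, assumed form)."""
    d = q.shape[-1]
    n_eval = k.shape[0]
    temp = 1.0 / np.sqrt(d)
    if n_train is not None and n_eval > n_train:
        temp *= np.log(n_eval) / np.log(n_train)
    scores = temp * (q @ k.T)
    return softmax(scores) @ v

def cosine_attention(q, k, v, cos_scale=10.0):
    """Cosine attention with a scalar temperature (CosScale-style);
    10.0 is a placeholder, not a setting recommended by the paper."""
    qn = q / np.linalg.norm(q, axis=-1, keepdims=True)
    kn = k / np.linalg.norm(k, axis=-1, keepdims=True)
    scores = cos_scale * (qn @ kn.T)
    return softmax(scores) @ v

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    d, n_train, n_eval = 64, 512, 4096
    q = rng.normal(size=(1, d))
    k = rng.normal(size=(n_eval, d))
    v = rng.normal(size=(n_eval, d))
    print(scaled_dot_attention(q, k, v, n_train=n_train).shape)  # (1, 64)
    print(cosine_attention(q, k, v).shape)                       # (1, 64)
```

The design intent illustrated here is that the temperature grows with the evaluation length so the softmax stays as concentrated as it was at the training length, while cosine attention bounds the score magnitudes and uses an explicit scale to control sharpness.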
Problem

Research questions and friction points this paper is trying to address.

Long Sequence Processing
Attention Mechanism Limitations
Attention Dispersion
Innovation

Methods, ideas, or system contributions that make the work stand out.

Information Entropy Invariance
InfoScale
CosScale
Kewei Li
College of Computer Science and Technology, and Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, China, 130012
Yanwen Kong
College of Computer Science and Technology, and Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, China, 130012
Yiping Xu
College of Computer Science and Technology, and Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, China, 130012
Lan Huang
College of Computer Science and Technology, and Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, China, 130012
Ruochi Zhang
College of Computer Science and Technology, and Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, China, 130012
Fengfeng Zhou
Bioinformatics, Data Analytics
Big data, feature engineering and selection, health informatics, bioinformatics, data mining