The Role of Sparsity for Length Generalization in Transformers

📅 2025-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates length generalization in decoder-only Transformers: prediction on sequences longer than those seen during training. The core observation is that generalization to longer sequences hinges on the dependency structure of the task, not on sequence length itself. The authors formalize this via *k*-sparse planted correlation distributions, a theoretical framework for tasks in which each predicted token depends on only a small, fixed number of previous tokens, and show that an idealized Transformer model with generalized attention heads length-generalizes on such tasks. The theory also justifies existing positional-embedding modifications such as position coupling. Guided by these results, they introduce *Predictive Position Coupling* (PPC), which trains the Transformer to predict the position IDs used in a position-coupling scheme, broadening the range of tasks to which position coupling applies. Experiments on synthetic tasks and natural language confirm that sparse token-level dependencies are a key driver of length generalization, and that PPC improves extrapolation on tasks such as arithmetic reasoning. The work thus provides both an interpretable theoretical foundation and a practical architectural intervention for Transformer length generalization.

📝 Abstract
Training large language models to predict beyond their training context lengths has drawn much attention in recent years, yet the principles driving such length-generalization behavior remain underexplored. We propose a new theoretical framework to study length generalization for the next-token prediction task, as performed by decoder-only transformers. Conceptually, we show that length generalization occurs as long as each predicted token depends on a small (fixed) number of previous tokens. We formalize such tasks via a notion we call $k$-sparse planted correlation distributions, and show that an idealized transformer model with generalized attention heads successfully length-generalizes on such tasks. As a bonus, our theoretical model justifies certain techniques for modifying positional embeddings which have been introduced to improve length generalization, such as position coupling. We support our theoretical results with experiments on synthetic tasks and natural language, which confirm that a key factor driving length generalization is a "sparse" dependency structure of each token on the previous ones. Inspired by our theory, we introduce Predictive Position Coupling, which trains the transformer to predict the position IDs used in a positional coupling approach. Predictive Position Coupling thereby broadens the array of tasks to which position coupling can successfully be applied to achieve length generalization.
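The abstract's notion of a sparse dependency structure can be illustrated with a toy data sampler. This is a minimal sketch under assumptions of my own (the additive rule, offsets, and function name are illustrative; the paper's formal $k$-sparse planted correlation distributions are more general): each target token is a deterministic function of $k$ earlier tokens at offsets that do not grow with sequence length, which is exactly the property that lets a model trained on short sequences extrapolate.

```python
import random

def sample_k_sparse_sequence(length, vocab_size=10, k=2, seed=0):
    """Toy sampler for a k-sparse dependency task: each token past the
    prefix is a function of k earlier tokens at fixed, length-independent
    offsets (here, offsets 1 and k). Illustrative only."""
    rng = random.Random(seed)
    offsets = [1, k]
    # random prefix of k tokens to seed the recurrence
    seq = [rng.randrange(vocab_size) for _ in range(k)]
    while len(seq) < length:
        parents = [seq[len(seq) - o] for o in offsets]
        seq.append(sum(parents) % vocab_size)  # simple planted rule
    return seq
```

Because the offsets are fixed, the rule that generates token $t$ is identical at every position, whether the sequence has 20 tokens or 20,000 — the hypothesized precondition for length generalization.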
Problem

Research questions and friction points this paper is trying to address.

Why do decoder-only transformers often fail to predict beyond their training context lengths?
Which token-level dependency structures make length generalization possible?
How can position coupling be extended to tasks where the coupled position IDs are not known at inference time?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparse dependency structure for generalization
Predictive Position Coupling technique
Idealized transformer model with generalized attention heads
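The position-coupling idea underlying PPC can be sketched for multi-digit addition. This is a hypothetical assignment of my own (the function name, separator convention, and digit ordering are assumptions, not the paper's exact recipe): tokens that must attend to one another — the $i$-th digit of each operand and of the sum — share the same position ID, so the attention pattern learned on short sums transfers to longer ones. PPC then trains the model to predict these IDs instead of requiring them to be supplied externally.

```python
def coupled_position_ids(a_digits, b_digits, sum_digits):
    """Assign coupled position IDs for an addition prompt of the form
    a + b = s: the i-th digit of a, b, and s all get ID i+1, and the
    '+' / '=' separators get ID 0. Illustrative sketch only."""
    ids = []
    ids += [i + 1 for i in range(len(a_digits))]    # operand a
    ids += [0]                                      # '+' separator
    ids += [i + 1 for i in range(len(b_digits))]    # operand b
    ids += [0]                                      # '=' separator
    ids += [i + 1 for i in range(len(sum_digits))]  # aligned sum digits
    return ids
```

Because IDs restart for each operand, a model trained on 3-digit addition sees the same ID pattern on 30-digit addition; predicting the IDs (rather than hard-coding them) is what lets the approach extend to tasks without an obvious hand-crafted coupling.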