Scale-invariant Attention

πŸ“… 2025-05-20
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Large language models (LLMs) trained on short contexts generalize poorly to long-context inference because attention is sensitive to sequence length. Method: The paper proposes scale-invariant attention, formally stating two conditions that effective long-context attention mechanisms should satisfy and, under a Gaussian assumption on the logits, deriving a position-dependent transformation of the attention logits that provably satisfies them; a sparsity condition further improves both efficiency and generalization. Crucially, the method enables zero-shot transfer to longer contexts without extending training sequence lengths. Results: It achieves a significant reduction in validation loss on standard validation sets and substantially outperforms baselines on long-range retrieval tasks. The core contributions are: (i) a theory-driven characterization of scale robustness in attention, and (ii) a lightweight, provably grounded modification of attention that preserves performance while scaling to longer contexts.

πŸ“ Abstract
One persistent challenge in LLM research is the development of attention mechanisms that are able to generalise from training on shorter contexts to inference on longer contexts. We propose two conditions that we expect all effective long context attention mechanisms to have: scale-invariant total attention, and scale-invariant attention sparsity. Under a Gaussian assumption, we show that a simple position-dependent transformation of the attention logits is sufficient for these conditions to hold. Experimentally we find that the resulting scale-invariant attention scheme gives considerable benefits in terms of validation loss when zero-shot generalising from training on short contexts to validation on longer contexts, and is effective at long-context retrieval.
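The failure mode the abstract describes can be seen in a toy experiment. The sketch below is an illustrative stand-in, not the paper's derived transformation: it samples i.i.d. Gaussian logits (mirroring the paper's Gaussian assumption) and compares plain softmax attention against a simple log-length rescaling of the logits, one example of a position-dependent transformation in the same spirit. With plain softmax, the weight on the strongest key collapses as the context grows; with the rescaled logits it stays roughly stable.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D array of logits."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)

for n in [256, 4096, 65536]:
    # i.i.d. Gaussian logits: a toy stand-in for query-key scores
    logits = rng.normal(size=n)
    plain = softmax(logits)                # standard attention weights
    scaled = softmax(np.log(n) * logits)   # log-length scaling (illustrative, not the paper's exact map)
    print(f"n={n:6d}  max plain={plain.max():.4f}  max scaled={scaled.max():.4f}")
```

As `n` grows, the maximum plain-softmax weight shrinks toward zero (total attention on any fixed key is not scale-invariant), while the transformed version keeps a comparable fraction of mass on the top key, which is the qualitative behaviour the paper's two conditions are designed to enforce.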
Problem

Research questions and friction points this paper is trying to address.

Develop attention mechanisms that generalize from short to long contexts
Propose scale-invariant conditions for effective long-context attention
Enhance zero-shot generalization and long-context retrieval performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Scale-invariant attention mechanism for LLMs
Position-dependent transformation of attention logits
Effective zero-shot generalization to longer contexts