LaMPE: Length-aware Multi-grained Position Encoding for Adaptive Long-context Scaling Without Training

📅 2025-08-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the significant performance degradation of large language models (LLMs) on inputs exceeding their pretrained context window—primarily caused by RoPE extrapolation failure—this paper proposes a training-free, length-aware, multi-granularity positional encoding method. The core innovation lies in (1) a parameterized sigmoid scaling function that dynamically maps input length to position indices, and (2) a multi-granularity attention mechanism that adaptively allocates positional encoding resolution across different token intervals, balancing fine-grained local modeling with long-range dependency capture. Fully compatible with standard RoPE architectures, the method requires no architectural modification or fine-tuning and is plug-and-play. Extensive experiments across three mainstream LLMs and five long-context benchmarks demonstrate substantial improvements over existing extrapolation techniques, notably enhancing long-context comprehension without additional training overhead.

📝 Abstract
Large language models (LLMs) experience significant performance degradation when the input exceeds the pretraining context window, primarily due to the out-of-distribution (OOD) behavior of Rotary Position Embedding (RoPE). Recent studies mitigate this problem by remapping OOD positions into the in-distribution range with fixed mapping strategies, ignoring the dynamic relationship between input length and the model's effective context window. To this end, we propose Length-aware Multi-grained Positional Encoding (LaMPE), a training-free method that fully utilizes the model's effective context window for adaptive long-context scaling in LLMs. Motivated by the left-skewed frequency distribution of relative positions, LaMPE establishes a dynamic relationship between mapping length and input length through a parametric scaled sigmoid function to adaptively allocate positional capacity across varying input lengths. Meanwhile, LaMPE devises a novel multi-grained attention mechanism that strategically allocates positional resolution across different sequence regions to capture both fine-grained locality and long-range dependencies. Our method can be seamlessly applied to a wide range of RoPE-based LLMs without training. Extensive experiments on three representative LLMs across five mainstream long-context benchmarks demonstrate that LaMPE achieves significant performance improvements compared to existing length extrapolation methods. The code will be released at https://github.com/scar-on/LaMPE.
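The abstract describes a parametric scaled sigmoid that ties the mapping length to the input length, so that inputs within the pretrained window are left untouched while longer inputs are compressed back into the in-distribution position range. The paper's exact parametrization is not reproduced on this page; the sketch below is only an illustration of the idea, with the window size, steepness `alpha`, and lower bound chosen arbitrarily:

```python
import math

def mapping_length(input_len: int,
                   train_window: int = 8192,
                   alpha: float = 1.0 / 2048) -> int:
    """Illustrative length-aware mapping (not LaMPE's exact formula).

    Returns the number of in-distribution position indices to spread the
    input over. Short inputs keep their original positions; longer inputs
    smoothly saturate toward the full pretrained window via a sigmoid.
    """
    if input_len <= train_window:
        return input_len  # in-distribution: no remapping needed
    # Sigmoid centered at the pretrained window, scaled by alpha.
    s = 1.0 / (1.0 + math.exp(-alpha * (input_len - train_window)))
    # Interpolate between a conservative lower bound and the full window,
    # so the mapping length grows with input length instead of being fixed.
    lo = train_window // 2
    return int(lo + (train_window - lo) * s)

# Positions beyond the window would then be rescaled by
# mapping_length(n) / n before computing RoPE angles.
```

Fixed-mapping baselines use a constant compression ratio regardless of input length; the point of the length-aware variant, as the abstract frames it, is that the ratio adapts so the model's effective context window is fully used at every input length.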
Problem

Research questions and friction points this paper is trying to address.

Addresses performance drop in LLMs with long inputs
Improves Rotary Position Embedding for varied input lengths
Enables adaptive long-context scaling without retraining
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic length-aware position encoding
Multi-grained attention mechanism
Training-free adaptive context scaling
Sikui Zhang
Beijing Key Laboratory of Super Intelligent Security of Multi-Modal Information, CASIA; State Key Laboratory of Multimodal Artificial Intelligence Systems, CASIA; School of Artificial Intelligence, University of Chinese Academy of Sciences
Guangze Gao
Beijing Key Laboratory of Super Intelligent Security of Multi-Modal Information, CASIA; State Key Laboratory of Multimodal Artificial Intelligence Systems, CASIA
Ziyun Gan
Beijing Key Laboratory of Super Intelligent Security of Multi-Modal Information, CASIA; State Key Laboratory of Multimodal Artificial Intelligence Systems, CASIA
Chunfeng Yuan
National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
Computer Vision, Pattern Recognition, Machine Learning, Human Action Recognition, Sparse Representation
Zefeng Lin
Beijing Key Laboratory of Super Intelligent Security of Multi-Modal Information, CASIA; State Key Laboratory of Multimodal Artificial Intelligence Systems, CASIA
Houwen Peng
Microsoft Research
Computer Vision, Machine Learning, Efficient Deep Learning, Large Language Models
Bing Li
Beijing Key Laboratory of Super Intelligent Security of Multi-Modal Information, CASIA; State Key Laboratory of Multimodal Artificial Intelligence Systems, CASIA
Weiming Hu
Beijing Key Laboratory of Super Intelligent Security of Multi-Modal Information, CASIA; State Key Laboratory of Multimodal Artificial Intelligence Systems, CASIA