Effective Length Extrapolation via Dimension-Wise Positional Embeddings Manipulation

📅 2025-04-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) suffer significant performance degradation when input sequence length exceeds their pretrained context window; existing context extrapolation methods typically require fine-tuning or compromise inference efficiency. To address this, we propose a training-free, fine-grained context extension method grounded in RoPE-based positional encoding analysis: by differentially examining hidden dimensions, we identify and rescale critical dimension-specific position indices, enabling dimension-level positional embedding manipulation. Our approach is fully compatible with Flash Attention 2, reuses original position indices and embeddings, and modifies no model weights. Experiments demonstrate that Llama3-8B supports 128K-context inference, while Llama3.1-70B achieves an average +18-point gain on the RULER benchmark—outperforming GPT-4-128K. This work establishes the first high-accuracy, zero-fine-tuning, and computationally efficient long-context extrapolation framework.

📝 Abstract
Large Language Models (LLMs) often struggle to process and generate coherent context when the number of input tokens exceeds the pre-trained length. Recent advancements in long-context extension have significantly expanded the context window of LLMs but require expensive overhead to train large-scale models with longer contexts. In this work, we propose Dimension-Wise Positional Embeddings Manipulation (DPE), a training-free framework to extrapolate the context window of LLMs by diving into RoPE's different hidden dimensions. Instead of manipulating all dimensions equally, DPE detects the effective length for every dimension and finds the key dimensions for context extension. We reuse the original position indices with their embeddings from the pre-trained model and manipulate the key dimensions' position indices to their most effective lengths. In this way, DPE adjusts the pre-trained models with minimal modifications while ensuring that each dimension reaches its optimal state for extrapolation. DPE significantly surpasses well-known baselines such as YaRN and Self-Extend. DPE enables Llama3 8B to support context windows of 128k tokens without continual training and integrates seamlessly with Flash Attention 2. In addition to its impressive extrapolation capability, DPE also dramatically improves models' performance within the training length, boosting Llama3.1 70B by over 18 points on the popular long-context benchmark RULER. When compared with commercial models, Llama3.1 70B with DPE even achieves better performance than GPT-4-128K.
Problem

Research questions and friction points this paper is trying to address.

Extends LLM context windows without costly retraining
Improves RoPE embeddings via dimension-wise optimization
Enables long-context processing (e.g., 128k tokens)
Innovation

Methods, ideas, or system contributions that make the work stand out.

Manipulates RoPE's hidden dimensions for extrapolation
Reuses original position indices with minimal modifications
Enables long-context support without continual training
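
The bullets above describe the mechanism at a high level. As a minimal numpy sketch of what "dimension-wise" position-index manipulation means: vanilla RoPE rotates every dimension pair with the same position index, whereas a DPE-style scheme gives each pair its own index and rescales only selected "key" pairs so they stay within an effective length. The linear compression rule, the `key_pairs` selection, and the `effective_len` value here are all illustrative stand-ins, not the paper's actual detection procedure.

```python
import numpy as np

def dimensionwise_positions(seq_len, num_pairs, key_pairs, effective_len):
    """Build a per-(token, dimension-pair) position matrix.

    Non-key pairs keep their original indices; key pairs are linearly
    compressed into [0, effective_len) so they never rotate past the
    range seen during pre-training (illustrative rule).
    """
    pos = np.tile(np.arange(seq_len, dtype=float)[:, None], (1, num_pairs))
    scale = effective_len / seq_len
    for i in key_pairs:
        pos[:, i] *= scale  # compress only the key dimension pairs
    return pos

def rope_angles(pos, base=10000.0):
    """Rotation angle per (token, pair): pos[t, i] * base^(-2i/dim)."""
    dim = 2 * pos.shape[1]
    inv_freq = base ** (-np.arange(0, dim, 2) / dim)
    return pos * inv_freq  # broadcasts over the pair axis
```

For example, with `seq_len=128`, `effective_len=32`, and pairs 2 and 3 treated as key dimensions, untouched pairs keep indices up to 127 while key pairs top out at 31.75; the angles matrix then feeds the usual sin/cos rotation unchanged, which is why the method modifies no model weights.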
Yi Lu
Fudan University
Wanxu Zhao
Fudan University
Xin Zhou
Fudan University
Chenxin An
The University of Hong Kong
Chenglong Wang
Northeastern University
Shuo Li
Fudan University
Yuming Yang
Fudan University
Jun Zhao
Fudan University
Tao Ji
Renmin University of China
Tao Gui
Fudan University
Qi Zhang
Fudan University
Xuanjing Huang
Fudan University