HoPE: Hyperbolic Rotary Positional Encoding for Stable Long-Range Dependency Modeling in Large Language Models

📅 2025-09-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing positional encodings (e.g., RoPE) suffer from attention oscillation and unstable long-range dependency modeling in long-sequence scenarios. This paper proposes Hyperbolic Rotational Positional Encoding (HoPE), the first to incorporate Lorentz transformations from hyperbolic geometry into positional encoding, modeling relative positions via rotations on a hyperboloid. We theoretically prove that RoPE emerges as a Euclidean-limit special case of HoPE; moreover, HoPE guarantees monotonic decay of attention weights with respect to token distance, substantially enhancing stability in long-range dependency modeling. Experiments on long-sequence benchmarks—including PG-19 and WikiText—demonstrate that HoPE consistently outperforms RoPE, Alibi, and other baselines, achieving lower perplexity and superior length extrapolation capability. HoPE thus establishes a more robust and geometrically principled paradigm for positional encoding in long-context Transformers.

📝 Abstract
Positional encoding mechanisms enable Transformers to model sequential structure and long-range dependencies in text. While absolute positional encodings struggle to extrapolate to longer sequences due to their fixed positional representations, and relative approaches such as Alibi degrade on extremely long contexts, the widely used Rotary Positional Encoding (RoPE) introduces oscillatory attention patterns that hinder stable long-distance dependency modelling. We address these limitations through a geometric reformulation of positional encoding. Drawing inspiration from Lorentz transformations in hyperbolic geometry, we propose Hyperbolic Rotary Positional Encoding (HoPE), which leverages hyperbolic functions to implement Lorentz rotations on token representations. Theoretical analysis demonstrates that RoPE is a special case of our generalized formulation. HoPE fundamentally resolves RoPE's oscillation issues by enforcing monotonic decay of attention weights with increasing token distance. Extensive experimental results, including perplexity evaluations on several extended-sequence benchmarks, show that HoPE consistently outperforms existing positional encoding methods. These findings underscore HoPE's enhanced capacity for representing and generalizing long-range dependencies. Data and code will be made available.
Problem

Research questions and friction points this paper is trying to address.

Addresses unstable long-range dependency modeling in Transformers
Resolves oscillatory attention patterns in Rotary Positional Encoding
Enhances generalization to extremely long sequences through hyperbolic geometry
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hyperbolic geometry for positional encoding
Lorentz rotations on token representations
Monotonic attention decay for long-range dependencies
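The core idea above can be illustrated with a small sketch. This is not the paper's implementation, only an assumed analogue: where RoPE rotates each 2-D feature pair by an angle proportional to position, a Lorentz-boost variant applies the hyperbolic matrix [[cosh a, sinh a], [sinh a, cosh a]] with rapidity a proportional to position. Under a Minkowski-style inner product, boosted query/key scores then depend only on the relative offset n − m, mirroring RoPE's relative-position property (the monotonic-decay guarantee claimed by the paper is not demonstrated here). The function names, frequency schedule, and dimensions are illustrative choices.

```python
import numpy as np

def hope_boost(x, pos, freqs):
    """Lorentz-boost analogue of rotary encoding (hypothetical sketch):
    each 2-D feature pair (x1, x2) is multiplied by
    [[cosh a, sinh a], [sinh a, cosh a]] with rapidity a = pos * freq."""
    x1, x2 = x[0::2], x[1::2]
    a = pos * freqs
    out = np.empty_like(x)
    out[0::2] = x1 * np.cosh(a) + x2 * np.sinh(a)
    out[1::2] = x1 * np.sinh(a) + x2 * np.cosh(a)
    return out

def minkowski_score(q, k):
    """Minkowski-style inner product per pair: q1*k1 - q2*k2, summed.
    Boosts preserve this form, so the score depends only on n - m."""
    return np.sum(q[0::2] * k[0::2] - q[1::2] * k[1::2])

d = 8
freqs = 0.01 / (10.0 ** np.arange(d // 2))  # small rapidities for stability
rng = np.random.default_rng(0)
q, k = rng.normal(size=d), rng.normal(size=d)

# Same relative offset (7-3 == 14-10) -> same score, as with RoPE.
s1 = minkowski_score(hope_boost(q, 3, freqs), hope_boost(k, 7, freqs))
s2 = minkowski_score(hope_boost(q, 10, freqs), hope_boost(k, 14, freqs))
print(np.isclose(s1, s2))  # True
```

In the limit of vanishing rapidity the boost matrix reduces toward the identity, which is consistent with the paper's claim that RoPE arises as a Euclidean-limit special case of the hyperbolic formulation.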
Chang Dai
Peking University
Hongyu Shan
Tianjin University
Mingyang Song
Tencent
Di Liang
University of Michigan