🤖 AI Summary
Existing positional encodings (e.g., RoPE) suffer from attention oscillation and unstable long-range dependency modeling in long-sequence scenarios. This paper proposes Hyperbolic Rotational Positional Encoding (HoPE), the first to incorporate Lorentz transformations from hyperbolic geometry into positional encoding, modeling relative positions via rotations on a hyperboloid. We theoretically prove that RoPE emerges as a Euclidean-limit special case of HoPE; moreover, HoPE guarantees monotonic decay of attention weights with respect to token distance, substantially enhancing stability in long-range dependency modeling. Experiments on long-sequence benchmarks—including PG-19 and WikiText—demonstrate that HoPE consistently outperforms RoPE, Alibi, and other baselines, achieving lower perplexity and superior length extrapolation capability. HoPE thus establishes a more robust and geometrically principled paradigm for positional encoding in long-context Transformers.
📝 Abstract
Positional encoding mechanisms enable Transformers to model sequential structure and long-range dependencies in text. While absolute positional encodings struggle with extrapolation to longer sequences due to fixed positional representations, and relative approaches like Alibi exhibit performance degradation on extremely long contexts, the widely-used Rotary Positional Encoding (RoPE) introduces oscillatory attention patterns that hinder stable long-distance dependency modeling. We address these limitations through a geometric reformulation of positional encoding. Drawing inspiration from Lorentz transformations in hyperbolic geometry, we propose Hyperbolic Rotary Positional Encoding (HoPE), which leverages hyperbolic functions to implement Lorentz rotations on token representations. Theoretical analysis demonstrates that RoPE is a special case of our generalized formulation. HoPE fundamentally resolves RoPE's oscillation issues by enforcing monotonic decay of attention weights with increasing token distance. Extensive experimental results, including perplexity evaluations on several extended-sequence benchmarks, show that HoPE consistently outperforms existing positional encoding methods. These findings underscore HoPE's enhanced capacity for representing and generalizing long-range dependencies. Data and code will be available.
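To make the core idea concrete, here is a minimal illustrative sketch (not the paper's actual implementation; all function names and the choice of a single 2-d feature pair are assumptions). Where RoPE applies a 2×2 Euclidean rotation to each query/key feature pair, a hyperbolic rotation replaces it with a 2×2 Lorentz boost built from `cosh`/`sinh`. Under the Minkowski inner product, the score between a query boosted by position m and a key boosted by position n depends only on the relative offset n − m, mirroring the relative-position property of RoPE:

```python
import numpy as np

def lorentz_boost(theta):
    # 2x2 Lorentz boost ("hyperbolic rotation"): the hyperbolic
    # analogue of RoPE's 2x2 Euclidean rotation matrix.
    return np.array([[np.cosh(theta), np.sinh(theta)],
                     [np.sinh(theta), np.cosh(theta)]])

def hope_encode(x, pos, theta=0.1):
    # Apply the boost for position `pos` to one 2-d feature pair.
    # (Illustrative only; the paper's HoPE may differ in detail.)
    return lorentz_boost(pos * theta) @ x

eta = np.diag([1.0, -1.0])  # Minkowski metric for the inner product

q = np.array([0.8, 0.3])
k = np.array([0.5, -0.2])

def score(m, n):
    # Minkowski inner product of query at position m, key at position n
    return hope_encode(q, m) @ eta @ hope_encode(k, n)

# Boosts compose additively and preserve eta, so the score depends
# only on the relative offset n - m:
print(np.isclose(score(3, 7), score(10, 14)))  # True (both offsets are 4)
```

The relative-position property follows because boosts satisfy B(a)ᵀ η B(b) = η B(b − a); in the Euclidean limit the boost reduces to an ordinary rotation, which is the sense in which RoPE appears as a special case.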