🤖 AI Summary
This work challenges the prevailing assumption that Rotary Position Embedding (RoPE) must preserve positional equivariance, i.e., strictly encode relative positions, in vision tasks. To this end, the authors propose Spherical RoPE: a RoPE variant constructed under a non-commutative generator assumption, employing a spherical parameterization to realize a non-equivariant positional encoding. Evaluated across image classification, object detection, and semantic segmentation, Spherical RoPE matches or surpasses standard equivariant RoPE. The empirical results suggest that explicit modeling of relative positional relationships is not necessary for strong performance in vision Transformers, contradicting the conventional belief that relative position encoding drives model efficacy. The work motivates the design of non-equivariant positional encodings that can be faster and generalize better, and provides theoretical grounding for their viability in visual representation learning.
📝 Abstract
Rotary Positional Encodings (RoPE) have emerged as a highly effective technique for one-dimensional sequences in Natural Language Processing, spurring recent progress toward generalizing RoPE to higher-dimensional data such as images and videos. The success of RoPE has been attributed to its positional equivariance, i.e., its status as a relative positional encoding. In this paper, we mathematically show RoPE to be one of the most general solutions for equivariant positional embedding in one-dimensional data. Moreover, we show Mixed RoPE to be the analogously general solution for M-dimensional data if we require commutative generators -- a property necessary for RoPE's equivariance. However, we question whether strict equivariance plays a large role in RoPE's performance. We propose Spherical RoPE, a method analogous to Mixed RoPE but one that assumes non-commutative generators. Empirically, we find Spherical RoPE to have equivalent or better learning behavior compared to its equivariant analogues. This suggests that relative positional embeddings are not as important as is commonly believed, at least within computer vision. We expect this discovery to facilitate future work in positional encodings for vision that can be faster and generalize better by removing the preconception that they must be relative.
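To make the equivariance property concrete, here is a minimal NumPy sketch of the standard 1-D RoPE construction the abstract refers to (the function name and frequency base are illustrative, not from the paper). Each 2-D pair of features is rotated by an angle proportional to the token position, so the attention score between a rotated query and key depends only on their positional offset:

```python
import numpy as np

def rope_rotate(x, pos, base=10000.0):
    """Standard 1-D RoPE: rotate each 2-D feature pair by pos * freq.
    (Illustrative sketch; not the paper's Spherical RoPE.)"""
    d = x.shape[-1]
    assert d % 2 == 0, "feature dim must be even"
    freqs = base ** (-np.arange(0, d, 2) / d)   # one frequency per 2-D pair
    angles = pos * freqs
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = np.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin        # 2x2 rotation per pair
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

# Equivariance check: <R(m)q, R(n)k> = q^T R(n-m) k depends only on n - m.
rng = np.random.default_rng(0)
q, k = rng.normal(size=8), rng.normal(size=8)
s1 = rope_rotate(q, 3) @ rope_rotate(k, 1)     # positions (3, 1), offset 2
s2 = rope_rotate(q, 10) @ rope_rotate(k, 8)    # positions (10, 8), offset 2
print(np.isclose(s1, s2))  # True: score is a function of relative position only
```

Spherical RoPE drops exactly this guarantee: with non-commutative generators, the rotations no longer compose so that the score reduces to a function of the offset alone.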