🤖 AI Summary
Standard 2D axial Rotary Position Embedding (RoPE) models positional relationships only along the horizontal and vertical axes, limiting its ability to capture the rich diagonal spatial dependencies present in images. To address this, this work proposes Spiral RoPE, which introduces a multi-directional rotary position encoding with uniformly distributed directions, presented as the first such mechanism within the RoPE framework. By partitioning embedding channels into directional groups and rotating each group according to a directional projection of the patch position, Spiral RoPE models relative positions along multiple orientations in the 2D plane. This approach overcomes the constraints of traditional axial formulations and consistently improves performance across image classification, segmentation, and generation tasks. Attention visualizations further show sharper focus on semantic objects and more accurate boundary perception.
📝 Abstract
Rotary Position Embedding (RoPE) is the de facto positional encoding in large language models due to its ability to encode relative positions and support length extrapolation. When adapted to vision transformers, the standard axial formulation decomposes two-dimensional spatial positions into horizontal and vertical components, implicitly restricting positional encoding to axis-aligned directions. We identify this directional constraint as a fundamental limitation of the standard axial 2D RoPE: it hinders the modeling of the oblique spatial relationships that naturally arise in images. To overcome this limitation, we propose Spiral RoPE, a simple yet effective extension that enables multi-directional positional encoding by partitioning embedding channels into multiple groups associated with uniformly distributed directions. Each group is rotated according to the projection of the patch position onto its corresponding direction, allowing spatial relationships to be encoded beyond the horizontal and vertical axes. Across a wide range of vision tasks including classification, segmentation, and generation, Spiral RoPE consistently improves performance. Qualitative analysis of attention maps further shows that Spiral RoPE exhibits more concentrated activations on semantically relevant objects and better respects local object boundaries, highlighting the importance of multi-directional positional encoding in vision transformers.
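The mechanism described in the abstract (channel groups tied to uniformly spaced directions, with each group rotated by the projection of the patch position onto its direction) can be sketched in NumPy as follows. This is a minimal illustration, not the paper's exact formulation: the number of directions, the frequency schedule, and the channel layout are all assumptions made for the example.

```python
import numpy as np

def spiral_rope_angles(h, w, head_dim, num_dirs=4, base=100.0):
    """Per-channel-pair rotation angles for a Spiral-RoPE-style encoding.

    Channels are split into `num_dirs` groups; group g is tied to a unit
    direction (cos t_g, sin t_g) with angles t_g uniformly spaced over
    [0, pi). A patch at (x, y) rotates group g's channel pairs by angles
    proportional to the scalar projection x*cos(t_g) + y*sin(t_g).
    """
    assert head_dim % (2 * num_dirs) == 0
    pairs_per_dir = head_dim // (2 * num_dirs)
    # RoPE-style geometric frequency decay within each directional group
    freqs = base ** (-np.arange(pairs_per_dir) / pairs_per_dir)
    thetas = np.pi * np.arange(num_dirs) / num_dirs  # uniform in [0, pi)

    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # projection of every patch position onto every direction: (num_dirs, h, w)
    proj = np.cos(thetas)[:, None, None] * xs + np.sin(thetas)[:, None, None] * ys
    # (num_dirs, h, w, pairs_per_dir) -> (h, w, head_dim // 2)
    angles = proj[..., None] * freqs
    return angles.transpose(1, 2, 0, 3).reshape(h, w, head_dim // 2)

def apply_rope(x, angles):
    """Rotate consecutive channel pairs of x (..., head_dim) by `angles`."""
    x1, x2 = x[..., 0::2], x[..., 1::2]
    cos, sin = np.cos(angles), np.sin(angles)
    out = np.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin  # standard 2D rotation per pair
    out[..., 1::2] = x1 * sin + x2 * cos
    return out
```

Because each group's angle is linear in position, the dot product between a rotated query and a rotated key depends only on the relative displacement projected onto each direction, preserving RoPE's relative-position property while covering orientations beyond the two axes.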