🤖 AI Summary
This work addresses the lack of Rotary Position Encoding (RoPE)-style positional encodings for non-grid, graph-structured data. The authors propose WIRE, the first general-purpose extension of RoPE to arbitrary graphs. WIRE constructs its rotations from graph wavelet analysis and is, by construction, equivariant under permutations of the node ordering and compatible with linear attention. Under mild assumptions, it asymptotically depends on graph resistance distance, thereby explicitly encoding structural similarity. Parameter-free and plug-and-play, WIRE integrates into diverse graph neural networks without architectural modification. Extensive experiments show that WIRE consistently outperforms existing positional encoding methods on monochromatic subgraph detection, point cloud semantic segmentation, and multiple standard graph benchmarks, with gains most pronounced on structure-sensitive tasks, supporting both its theoretical foundation and empirical efficacy.
📝 Abstract
We introduce WIRE: Wavelet-Induced Rotary Encodings. WIRE extends Rotary Position Encodings (RoPE), a positional encoding scheme popular in LLMs and ViTs, to graph-structured data. We demonstrate that WIRE is more general than RoPE, recovering the latter in the special case of grid graphs. WIRE also enjoys a host of desirable theoretical properties, including equivariance under node ordering permutation, compatibility with linear attention, and (under select assumptions) asymptotic dependence on graph resistance distance. We test WIRE on a range of synthetic and real-world tasks, including identifying monochromatic subgraphs, semantic segmentation of point clouds, and more standard graph benchmarks. We find it to be effective in settings where the underlying graph structure is important.
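For context, the standard RoPE that WIRE generalizes rotates consecutive feature pairs of queries and keys by angles proportional to sequence position, so that attention scores depend only on relative position. The abstract does not spell out WIRE's graph-wavelet construction, so the sketch below shows only classical sequence RoPE (which WIRE recovers on grid graphs); the function name `rope_rotate` and the frequency schedule are illustrative assumptions, not the authors' code.

```python
import numpy as np

def rope_rotate(x, pos, base=10000.0):
    """Classical RoPE sketch (illustrative, not WIRE itself): rotate each
    consecutive feature pair of x by an angle proportional to position."""
    d = x.shape[-1]
    assert d % 2 == 0, "RoPE rotates feature pairs, so d must be even"
    # one rotation frequency per feature pair, as in standard RoPE
    freqs = base ** (-np.arange(0, d, 2) / d)
    theta = pos * freqs
    cos, sin = np.cos(theta), np.sin(theta)
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = np.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

# Key RoPE property: <rope(q, m), rope(k, n)> depends only on m - n,
# so attention scores encode relative, not absolute, position.
rng = np.random.default_rng(0)
q, k = rng.normal(size=4), rng.normal(size=4)
s1 = rope_rotate(q, 3) @ rope_rotate(k, 1)    # relative offset 2
s2 = rope_rotate(q, 10) @ rope_rotate(k, 8)   # relative offset 2
print(np.isclose(s1, s2))  # → True
```

Because the rotation acts on queries and keys independently of one another, this construction composes with linear-attention kernels as well, which is the compatibility property the abstract claims WIRE preserves on graphs.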