AI Summary
This work proposes a Yat-kernel-based spherical linearized attention mechanism to address the quadratic computational complexity of standard Transformer attention, which hinders scalability to long sequences. By constraining queries and keys to the unit hypersphere, the method makes attention depend solely on angular alignment. Leveraging Bernstein's theorem, it constructs non-negative random-feature approximations that guarantee strict positive definiteness while achieving linear time complexity O(L). As the first linear attention scheme to integrate geometry-aware Yat kernels with spherical constraints, the approach matches standard softmax attention almost exactly and significantly outperforms existing methods such as Performer and Cosformer.
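The spherical constraint can be illustrated directly: once queries and keys are unit-normalized, the squared Euclidean distance collapses to a function of the inner product alone, so any distance-based kernel (such as the inverse-square Yat-kernel) depends only on angular alignment. A minimal numerical check of this identity (illustrative only, not the paper's code):

```python
import numpy as np

# On the unit sphere, ||q - k||^2 = 2 - 2 <q, k>, so a kernel of the
# distance ||q - k|| is a function of the cosine similarity <q, k> alone.
rng = np.random.default_rng(0)
q = rng.normal(size=8)
k = rng.normal(size=8)
q /= np.linalg.norm(q)          # project query onto the unit sphere
k /= np.linalg.norm(k)          # project key onto the unit sphere

sq_dist = np.sum((q - k) ** 2)  # squared Euclidean distance
cos_sim = q @ k                 # angular alignment <q, k>
```

Because of this identity, evaluating the kernel on the sphere never requires the raw distances, only dot products, which is what makes a random-feature factorization of the attention matrix possible.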
Abstract
We propose a new class of linear-time attention mechanisms based on a relaxed and computationally efficient formulation of the recently introduced E-Product, often referred to as the Yat-kernel (Bouhsine, 2025). The resulting interactions are geometry-aware and inspired by inverse-square interactions in physics. Our method, Spherical Linearized Attention with Yat Kernels (SLAY), constrains queries and keys to the unit sphere so that attention depends only on angular alignment. Using Bernstein's theorem, we express the spherical Yat-kernel as a nonnegative mixture of polynomial-exponential product kernels and derive a strictly positive random-feature approximation enabling linear-time O(L) attention. We establish positive definiteness and boundedness on the sphere and show that the estimator yields well-defined, nonnegative attention scores. Empirically, SLAY achieves performance that is nearly indistinguishable from standard softmax attention while retaining linear time and memory scaling, and consistently outperforms prior linear-time attention mechanisms such as Performers and Cosformers. To the best of our knowledge, SLAY represents the closest linear-time approximation to softmax attention reported to date, enabling scalable Transformers without the typical performance trade-offs of attention linearization.
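The linear-time claim rests on a standard factorization: once the kernel admits a nonnegative feature map phi with k(q, k) ≈ phi(q)·phi(k), the attention matrix never needs to be materialized, because matrix associativity lets phi(K)ᵀV be computed once in O(L) and reused for every query. A minimal sketch of this mechanism, using a generic Performer-style positive feature map as a stand-in (SLAY's Yat-kernel features differ; the names `phi`, `W`, and the dimensions here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
L, d, m = 16, 8, 64                                 # seq length, head dim, feature dim
Q = rng.normal(size=(L, d))
K = rng.normal(size=(L, d))
V = rng.normal(size=(L, d))
Q /= np.linalg.norm(Q, axis=-1, keepdims=True)      # spherical constraint on queries
K /= np.linalg.norm(K, axis=-1, keepdims=True)      # spherical constraint on keys

W = rng.normal(size=(m, d))                         # random projection directions

def phi(X):
    # Nonnegative random features (Performer-style stand-in): positivity of
    # phi guarantees nonnegative, well-defined attention scores.
    return np.exp(X @ W.T - 0.5 * np.sum(X**2, axis=-1, keepdims=True)) / np.sqrt(m)

# Associativity: phi(Q) @ (phi(K).T @ V) costs O(L * m * d) instead of the
# O(L^2 * d) of forming the full L x L attention matrix.
numerator = phi(Q) @ (phi(K).T @ V)                 # (L, d)
denominator = phi(Q) @ phi(K).sum(axis=0)           # (L,) row normalizer
out = numerator / denominator[:, None]
```

The sketch produces exactly the same output as the quadratic-time formulation with attention weights phi(Q)phi(K)ᵀ; the saving comes purely from the order of multiplication, which is why the approximation quality of the feature map is the only thing separating a linear scheme like SLAY from full softmax attention.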