Subquadratic Algorithms and Hardness for Attention with Any Temperature

📅 2025-05-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Standard attention computation takes O(n²) time in the context length n, limiting efficiency for long-context modeling. Method: The paper characterizes when subquadratic attention is possible for inputs of arbitrary entry magnitude B, i.e., without assuming a high softmax temperature, using new algorithmic and fine-grained complexity techniques. Contribution/Results: It gives the first Õ(n^{2−1/d}·polylog B)-time algorithm for constant head dimension d, extending to large head dimension when the input matrices have low rank; in the same regime it obtains subquadratic attention gradient computation, and hence subquadratic full training. On the hardness side, it proves under SETH that n^{2−o(1)} time is required even when d = 2^{Θ(log* n)}, and shows that the standard algorithm is optimal when d = poly(n) under popular fine-grained complexity assumptions. Together, these results break the quadratic barrier for unbounded-entry attention in the low-dimensional regime and delineate its fundamental limits.
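For concreteness, the attention operation these bounds concern can be written as follows. This is the standard softmax-attention formulation; the notation is assumed rather than quoted from the paper.

```latex
% Softmax attention on Q, K, V \in \mathbb{R}^{n \times d} with entries
% bounded by B in absolute value:
\[
  \mathrm{Attn}(Q, K, V) = D^{-1} A V,
  \qquad A = \exp\!\bigl(Q K^\top\bigr),
  \qquad D = \mathrm{diag}(A \mathbf{1}_n).
\]
% Applying the softmax at temperature T rescales the scores to QK^\top / T,
% so the "high temperature" regime is equivalent to a small entry bound B.
```

Computing A explicitly costs Θ(n²d) time, which is the barrier the paper's algorithms and lower bounds address.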

📝 Abstract
Despite the popularity of the Transformer architecture, the standard algorithm for computing Attention suffers from quadratic time complexity in context length $n$. Alman and Song [NeurIPS 2023] showed that when the head dimension $d = \Theta(\log n)$, subquadratic Attention is possible if and only if the inputs have small entries bounded by $B = o(\sqrt{\log n})$ in absolute value, under the Strong Exponential Time Hypothesis ($\mathsf{SETH}$). Equivalently, subquadratic Attention is possible if and only if the softmax is applied with high temperature for $d = \Theta(\log n)$. Running times of these algorithms depend exponentially on $B$ and thus they do not lead to even a polynomial-time algorithm outside the specific range of $B$. This naturally leads to the question: when can Attention be computed efficiently without strong assumptions on temperature? Are there fast Attention algorithms that scale polylogarithmically with entry size $B$? In this work, we resolve this question and characterize when fast Attention for arbitrary temperatures is possible. First, for all constant $d = O(1)$, we give the first subquadratic $\tilde{O}(n^{2 - 1/d} \cdot \mathrm{polylog}(B))$-time algorithm for Attention with large $B$. Our result holds even for matrices with large head dimension if they have low rank. In this regime, we also give a similar running time for Attention gradient computation, and therefore for the full LLM training process. Furthermore, we show that any substantial improvement on our algorithm is unlikely. In particular, we show that even when $d = 2^{\Theta(\log^* n)}$, Attention requires $n^{2 - o(1)}$ time under $\mathsf{SETH}$. Finally, in the regime where $d = \mathrm{poly}(n)$, we show that the standard algorithm is optimal under popular fine-grained complexity assumptions.
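The quadratic baseline discussed in the abstract can be sketched in a few lines. This is a minimal reference implementation of standard softmax attention, not the paper's subquadratic algorithm; the function name and random inputs are illustrative.

```python
import numpy as np

def attention(Q, K, V):
    """Standard softmax attention in O(n^2 * d) time for n x d inputs.

    The n x n score matrix S = QK^T is the quadratic bottleneck that the
    paper's algorithms avoid (for constant d) and its lower bounds show
    is unavoidable (for larger d).
    """
    S = Q @ K.T                          # n x n scores: the O(n^2) step
    S = S - S.max(axis=1, keepdims=True) # stabilize the exponentials
    A = np.exp(S)
    D = A.sum(axis=1, keepdims=True)     # softmax row normalizers
    return (A / D) @ V                   # n x d output

# Larger entry magnitude B (equivalently, lower softmax temperature)
# makes the softmax sharper; Alman-Song place the subquadratic threshold
# at B = o(sqrt(log n)) when d = Theta(log n).
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((8, 4)) for _ in range(3))
out = attention(Q, K, V)
```

Each output row is a convex combination of the rows of V, which is a quick sanity check that the softmax rows are correctly normalized.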
Problem

Research questions and friction points this paper is trying to address.

Develop subquadratic Attention algorithms for arbitrary temperatures
Characterize conditions for efficient Attention computation without strong assumptions
Establish optimality and limitations of Attention algorithms under complexity hypotheses
Innovation

Methods, ideas, or system contributions that make the work stand out.

First subquadratic Õ(n^{2−1/d}·polylog B) algorithm for arbitrarily large entry magnitude B at constant d
Low-rank structure extends the algorithm to large head dimensions and to gradient/training computation
Matching n^{2−o(1)} lower bounds under SETH and fine-grained complexity assumptions