🤖 AI Summary
Softmax-based soft attention suffers from quadratic computational and memory complexity in long sequences, while linear attention improves efficiency at the cost of significant modeling accuracy degradation. This paper establishes, for the first time, a strict equivalence between Softmax attention and a recurrent neural network (RNN), enabling analytically tractable structural decomposition. Through component-wise ablation analysis, we demonstrate that the Softmax nonlinearity is indispensable for ensuring stable sequence state updates and effective long-range dependency modeling; linear attention is shown to be a low-order approximation of this RNN, with its performance loss stemming from the omission of critical nonlinear dynamics. Our theoretical analysis identifies the precise source of Softmax attention's expressive superiority, providing a novel analytical framework and principled design guidelines for developing efficient yet accurate attention mechanisms.
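The approximation relationship described above can be made concrete with the standard formulations (a sketch in conventional notation, not equations taken verbatim from the paper; \(q_t, k_j, v_j\) denote queries, keys, and values of dimension \(d\), and \(\phi\) a nonnegative feature map):

```latex
% Causal softmax attention: each output reweights the entire history,
% so computing all T outputs costs O(T^2).
o_t = \frac{\sum_{j=1}^{t} \exp\!\left(q_t^\top k_j / \sqrt{d}\right) v_j}
           {\sum_{j=1}^{t} \exp\!\left(q_t^\top k_j / \sqrt{d}\right)}

% Linear attention: replacing exp(q^T k) with phi(q)^T phi(k)
% factorizes the sums into a fixed-size recurrent state.
S_t = S_{t-1} + \phi(k_t)\, v_t^\top, \qquad z_t = z_{t-1} + \phi(k_t)

o_t = \frac{\phi(q_t)^\top S_t}{\phi(q_t)^\top z_t}
```

Dropping the exponential in favor of a factorizable kernel is exactly what makes the state fixed-size, and, per the analysis above, also what discards the nonlinear dynamics responsible for softmax attention's expressiveness.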
📄 Abstract
Since its introduction, softmax attention has become the backbone of modern transformer architectures due to its expressiveness and scalability across a wide range of tasks. However, its main drawback is quadratic memory and computational complexity with respect to the sequence length. By replacing the softmax nonlinearity, linear attention and similar methods have been introduced to avoid this quadratic bottleneck. Although these linear forms of attention are derived from the original softmax formulation, they typically lag behind it in downstream accuracy. While intuition suggests that the softmax nonlinearity applied to the query-key inner product has desirable properties compared to other nonlinearities, the question of why this accuracy gap exists remains unanswered. This work demonstrates that linear attention is an approximation of softmax attention by deriving the recurrent form of softmax attention. Using this form, each part of softmax attention can be described in the language of recurrent neural networks (RNNs). Describing softmax attention as an RNN allows for the ablation of its components to understand the importance of each part and how they interact. In this way, our work helps explain why softmax attention is more expressive than its counterparts.
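The recurrent view of softmax attention can be illustrated numerically. Below is a minimal NumPy sketch (our own illustration, not the paper's code): causal softmax attention is computed both in its usual quadratic matrix form and via a step-by-step online-softmax recurrence over the keys, then contrasted with linear attention, whose state `S, z` stays fixed-size across time. The feature map `phi` (elementwise ReLU plus a small constant) is an assumption chosen for illustration.

```python
import numpy as np

def softmax_attention(Q, K, V):
    """Causal softmax attention in the standard quadratic matrix form."""
    T, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)
    mask = np.tril(np.ones((T, T), dtype=bool))
    scores = np.where(mask, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V

def softmax_attention_recurrent(Q, K, V):
    """Same causal attention, computed as a recurrence: for each query t,
    accumulate an exponentially weighted numerator and denominator over
    the keys seen so far (an online-softmax recurrence)."""
    T, d = Q.shape
    out = np.zeros_like(V)
    for t in range(T):
        num = np.zeros(V.shape[1])
        den = 0.0
        m = -np.inf  # running max for numerical stability
        for j in range(t + 1):
            s = Q[t] @ K[j] / np.sqrt(d)
            m_new = max(m, s)
            scale = np.exp(m - m_new) if np.isfinite(m) else 0.0
            num = num * scale + np.exp(s - m_new) * V[j]
            den = den * scale + np.exp(s - m_new)
            m = m_new
        out[t] = num / den
    return out

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0) + 1e-6):
    """Linear attention as an RNN with a fixed-size state:
    S_t = S_{t-1} + phi(k_t) v_t^T,  z_t = z_{t-1} + phi(k_t)."""
    T, d = Q.shape
    S = np.zeros((d, V.shape[1]))
    z = np.zeros(d)
    out = np.zeros_like(V)
    for t in range(T):
        S = S + np.outer(phi(K[t]), V[t])
        z = z + phi(K[t])
        q = phi(Q[t])
        out[t] = (q @ S) / (q @ z)
    return out

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((8, 4)) for _ in range(3))
exact = softmax_attention(Q, K, V)
recur = softmax_attention_recurrent(Q, K, V)
assert np.allclose(exact, recur)  # the recurrence matches the matrix form
approx = linear_attention(Q, K, V)  # same shape, but only an approximation
```

Note that the softmax recurrence cannot be collapsed into a fixed-size state like `S, z`: each new query re-weights the entire history through the exponential, which is precisely the nonlinearity that the linear variant discards.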