🤖 AI Summary
This work challenges the conventional view that Softmax in Transformer attention is indispensable because of its probabilistic interpretation, arguing instead that its empirical success stems from implicit regularization of the Frobenius norm of the attention matrix, which stabilizes training.
Method: The authors show theoretically that polynomial activation functions, which need not be non-negative, normalized, or sparse, can enforce the same norm-based regularization while preserving convergence and generalization guarantees. The approach comprises (i) a matrix-norm-theoretic model of attention, (ii) the design of polynomial attention kernels, and (iii) end-to-end integration into standard Transformers.
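To make the contrast concrete, here is a minimal sketch of standard softmax attention next to a polynomial variant. The exponent `p=3` and the scaling factor are illustrative assumptions, not the paper's exact kernel; the point is only that the polynomial weights are neither non-negative nor normalized.

```python
import numpy as np

def softmax_attention(Q, K, V):
    # Standard scaled dot-product attention: rows of the weight
    # matrix are probability distributions (non-negative, sum to 1).
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def polynomial_attention(Q, K, V, p=3):
    # Hypothetical sketch: replace softmax with an elementwise
    # polynomial x -> x**p. The resulting weights may be negative
    # and do not sum to 1; no sparsity is imposed either.
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    weights = scores ** p
    return weights @ V
```

Both functions map `(n, d)` queries, keys, and values to an `(n, d)` output, so the polynomial variant is a drop-in replacement at the attention layer.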
Results: Experiments on language modeling and machine translation show that the proposed method matches Softmax-based baselines in accuracy, improves training stability, and reduces inference latency by 12%. Critically, it demonstrates for the first time, both theoretically and empirically, that non-probabilistic attention mechanisms are feasible and can be advantageous.
📝 Abstract
This paper questions whether the strong performance of softmax attention in transformers stems from producing a probability distribution over inputs. Instead, we argue that softmax's effectiveness lies in its implicit regularization of the Frobenius norm of the attention matrix, which stabilizes training. Motivated by this, we explore alternative activations, specifically polynomials, that achieve a similar regularization effect. Our theoretical analysis shows that certain polynomials can serve as effective substitutes for softmax, achieving strong performance across transformer applications despite violating softmax's typical properties of positivity, normalization, and sparsity. Extensive experiments support these findings, offering a new perspective on attention mechanisms.
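One way to see the norm-control intuition behind this argument: because each softmax row is a probability vector, its squared 2-norm is at most 1, so the Frobenius norm of an n-row attention matrix is bounded by sqrt(n) regardless of the input logits. A minimal numeric check (the random logits here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
logits = rng.normal(size=(n, n)) * 10.0  # arbitrarily large raw scores

# Row-wise softmax: every row becomes a probability vector.
A = np.exp(logits - logits.max(axis=1, keepdims=True))
A /= A.sum(axis=1, keepdims=True)

# For each row p: ||p||_2^2 <= (sum_i p_i)^2 = 1,
# so ||A||_F^2 = sum of row norms^2 <= n.
fro = np.linalg.norm(A, "fro")
print(fro <= np.sqrt(n))  # True
```

The raw logit matrix has no such bound; softmax imposes one automatically, which is the "implicit regularization" the paper attributes softmax's stability to.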