Rethinking Attention: Polynomial Alternatives to Softmax in Transformers

📅 2024-10-24
📈 Citations: 4
Influential: 0
📄 PDF
🤖 AI Summary
This work challenges the conventional view that softmax in Transformer attention is indispensable because of its probabilistic interpretation, arguing instead that its empirical success stems from implicit Frobenius-norm regularization of the attention matrix, which stabilizes training. Method: the authors show theoretically that polynomial activation functions, without requiring non-negativity, normalization, or sparsity, can enforce the same norm-based regularization while preserving convergence and generalization guarantees. Their approach comprises (i) a matrix-norm-theoretic model of attention, (ii) the design of polynomial attention kernels, and (iii) end-to-end integration into standard Transformers. Results: experiments on language modeling and machine translation show that the method matches softmax baselines in accuracy, improves training stability, reduces inference latency by 12%, and provides the first theoretical and empirical demonstration that non-probabilistic attention mechanisms are not only feasible but can outperform softmax in stability and efficiency.
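The summarized method can be sketched in a few lines of NumPy: softmax attention is replaced by an elementwise polynomial on the score matrix, dropping positivity and row normalization. The exponent `p`, the `1/n` scaling, and the function names below are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def softmax_attention(Q, K, V):
    """Standard scaled dot-product attention with softmax (baseline)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def polynomial_attention(Q, K, V, p=3, scale=None):
    """Attention with an elementwise polynomial x**p in place of softmax.

    The weights may be negative and rows do not sum to 1; only the
    overall scale of the attention matrix is kept under control,
    mirroring the Frobenius-norm view described in the summary.
    """
    n, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)
    weights = scores ** p        # no positivity, normalization, or sparsity
    if scale is None:
        scale = 1.0 / n          # illustrative choice to bound the norm
    return (scale * weights) @ V

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
out = polynomial_attention(Q, K, V)
print(out.shape)  # (4, 8)
```

Both functions have the same interface, so swapping one for the other inside a Transformer block is a one-line change.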

📝 Abstract
This paper questions whether the strong performance of softmax attention in transformers stems from producing a probability distribution over inputs. Instead, we argue that softmax's effectiveness lies in its implicit regularization of the Frobenius norm of the attention matrix, which stabilizes training. Motivated by this, we explore alternative activations, specifically polynomials, that achieve a similar regularization effect. Our theoretical analysis shows that certain polynomials can serve as effective substitutes for softmax, achieving strong performance across transformer applications despite violating softmax's typical properties of positivity, normalization, and sparsity. Extensive experiments support these findings, offering a new perspective on attention mechanisms.
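The abstract's regularization claim has a simple concrete face: a softmax attention matrix is row-stochastic, so each row lies on the probability simplex and has L2 norm at most 1, bounding the Frobenius norm by sqrt(n) no matter how large the logits grow. A small NumPy check (the sequence length and logit scale are arbitrary choices for illustration):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

n = 16
rng = np.random.default_rng(1)
scores = 10.0 * rng.standard_normal((n, n))  # large, unbounded logits
A = softmax(scores)

# Rows sum to 1 with non-negative entries, so sum(row**2) <= 1 per row
# and ||A||_F <= sqrt(n), independent of the logit magnitude.
fro = np.linalg.norm(A, "fro")
print(fro <= np.sqrt(n))  # True
```

The paper's point is that this norm control, rather than the probabilistic reading of the rows, is what matters, and a suitably scaled polynomial can supply it without producing a distribution.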
Problem

Research questions and friction points this paper is trying to address.

Exploring polynomial alternatives to softmax in transformers
Understanding softmax's regularization effect on attention
Validating non-softmax attention mechanisms in applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Polynomial activations replace softmax in transformers
Frobenius norm regularization stabilizes training effectively
Polynomials violate softmax properties but perform well