AI Summary
This work proposes Affine-Scaled Attention, a novel attention mechanism that addresses limitations of standard softmax-based attention in Transformers. Conventional softmax forces attention weights to sum to one, which restricts flexible modulation of attention magnitudes and can lead to training instability or overly concentrated attention distributions. To overcome this, the proposed method introduces input-dependent scaling factors and bias terms applied after softmax normalization, a lightweight affine transformation that flexibly reweights attention outputs while preserving the model's capacity for value aggregation. Experiments on Transformer models of varying scales show that Affine-Scaled Attention improves training stability, optimization dynamics, and downstream task performance, consistently outperforming both standard softmax attention and recent alternatives such as attention sinks.
Abstract
Transformer attention is typically implemented with softmax normalization, which constrains the attention weights for each query to sum to one. While effective in many settings, this constraint limits flexibility in controlling attention magnitudes and may contribute to overly concentrated or unstable attention patterns during training. Prior work has explored modifications such as attention sinks or gating mechanisms, but these approaches provide only limited or indirect control over attention reweighting. We propose Affine-Scaled Attention, a simple extension to standard attention that introduces input-dependent scaling and a corresponding bias term applied to the softmax-normalized attention weights. This design relaxes the strict unit-sum constraint while preserving the aggregation of value representations, allowing the model to adjust both the relative distribution and the overall scale of attention in a controlled manner.
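To make the mechanism concrete, the following is a minimal NumPy sketch of a single attention head with an input-dependent affine reweighting applied after softmax. The specific parameterization of the scale and bias (here `1 + tanh(Q @ W_gamma)` and `tanh(Q @ W_beta)`, with hypothetical projection matrices `W_gamma` and `W_beta`) is an illustrative assumption, not the paper's exact formulation; the point is only that each query's normalized attention row is rescaled and shifted before aggregating values, relaxing the unit-sum constraint.

```python
import numpy as np

def affine_scaled_attention(Q, K, V, W_gamma, W_beta):
    """Single-head attention with input-dependent affine reweighting.

    Q: (n_q, d) queries; K, V: (n_k, d) keys and values.
    W_gamma, W_beta: (d, 1) projections producing a per-query scale
    and bias (assumed parameterization, for illustration only).
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                             # (n_q, n_k)
    # Numerically stable softmax over keys (standard attention weights).
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    # Input-dependent affine terms: per-query scale gamma and bias beta.
    gamma = 1.0 + np.tanh(Q @ W_gamma)                        # (n_q, 1)
    beta = np.tanh(Q @ W_beta)                                # (n_q, 1)
    # Affine reweighting: rows no longer need to sum to one.
    reweighted = gamma * weights + beta
    return reweighted @ V                                     # (n_q, d)
```

Note that when `W_gamma` and `W_beta` are zero, `gamma = 1` and `beta = 0`, so the mechanism reduces exactly to standard softmax attention; the affine terms thus act as a learned deviation from the conventional baseline.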
We empirically evaluate Affine-Scaled Attention in large-scale language model pretraining across multiple model sizes. Results show consistent improvements in training stability, optimization behavior, and downstream task performance over both standard softmax attention and attention sink baselines. These findings suggest that modest reweighting of attention outputs is a practical and effective way to improve attention behavior in Transformer models.