🤖 AI Summary
Standard softmax attention often yields noisy attention distributions, impairing the model’s ability to select salient tokens in long-context scenarios. To address this, we propose Focal Attention—a lightweight mechanism that sharpens attention distributions via learnable or fixed temperature scaling of the softmax operation, thereby enhancing focus on critical tokens. Integrated seamlessly into standard Transformer architectures, Focal Attention is optimized end-to-end without architectural modifications. Experiments across multiple benchmarks demonstrate that, at comparable accuracy, it reduces parameter count by up to 42% and training data requirements by up to 33%; on long-context tasks, it achieves relative performance gains of 17%–82%. The core innovation lies in adaptive, temperature-based attention sharpening—achieving improved feature selection efficiency and model scalability while preserving generalization and computational efficiency.
📝 Abstract
Attention is a core component of the Transformer architecture, whether in encoder-only, decoder-only, or encoder-decoder models. However, standard softmax attention often produces noisy probability distributions, which can impair effective feature selection at every layer of these models, particularly for long contexts. We propose Focal Attention, a simple yet effective modification that sharpens the attention distribution by controlling the softmax temperature, either as a fixed hyperparameter or as a learnable parameter during training. This sharpening enables the model to concentrate on the most relevant tokens while suppressing irrelevant ones. Empirically, Focal Attention scales more favorably than the standard Transformer with respect to model size, training data, and context length. Across diverse benchmarks, it achieves the same accuracy with up to 42% fewer parameters or 33% less training data. On long-context tasks, it delivers substantial relative improvements ranging from 17% to 82%, demonstrating its effectiveness in real-world applications.
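The mechanism described above can be sketched in a few lines of PyTorch. This is a hypothetical single-head illustration, not the authors' implementation: the class name, the log-space parameterization, and the initial temperature value are assumptions, while the core idea (dividing attention logits by a temperature below 1 to sharpen the softmax, optionally learned end-to-end) follows the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FocalAttention(nn.Module):
    """Illustrative single-head attention with a temperature-scaled softmax.

    A temperature < 1 sharpens the attention distribution (concentrating
    mass on the highest-scoring tokens); a temperature > 1 flattens it.
    The temperature can be fixed or learned jointly with the model.
    """

    def __init__(self, d_model: int, init_temp: float = 0.5, learnable: bool = True):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.scale = d_model ** -0.5
        # Parameterize in log space so the temperature stays positive
        # under gradient updates (an assumed design choice).
        log_temp = torch.tensor(init_temp).log()
        if learnable:
            self.log_temp = nn.Parameter(log_temp)
        else:
            self.register_buffer("log_temp", log_temp)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        scores = q @ k.transpose(-2, -1) * self.scale
        # Dividing by a temperature < 1 sharpens the softmax output.
        attn = F.softmax(scores / self.log_temp.exp(), dim=-1)
        return attn @ v
```

Because the change is a single scalar division inside the softmax, it adds negligible compute and drops into any existing attention layer, which is consistent with the paper's claim of requiring no architectural modifications.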