🤖 AI Summary
Transformer attention has O(N²) computational complexity, yet existing quantization methods primarily target linear layers and neglect this attention bottleneck. Method: This paper proposes the first high-precision 8-bit quantization scheme designed specifically for softmax and scaled dot-product attention. It introduces dynamic range calibration, gradient-aware pseudo-quantized training, and kernel-level integer quantization of the attention operations, all fully compatible with the FlashAttention interface. Contribution/Results: Experiments show that the method achieves 2.1× and 2.7× higher inference OPS than FlashAttention2 and xformers, respectively, while surpassing FlashAttention3 in accuracy. End-to-end performance on LLMs and on image and video generation tasks remains virtually lossless. The solution is plug-and-play, enabling efficient, high-fidelity inference without architectural modification.
📝 Abstract
The transformer architecture predominates across various models. As the heart of the transformer, attention has a computational complexity of O(N^2), compared to O(N) for linear transformations. When handling large sequence lengths, attention therefore becomes the primary time-consuming component. Although quantization has proven to be an effective method for accelerating model inference, existing quantization methods primarily focus on optimizing the linear layers. In response, we first analyze the feasibility of quantizing attention in detail. We then propose SageAttention, a highly efficient and accurate quantization method for attention. The OPS (operations per second) of our approach outperforms FlashAttention2 and xformers by about 2.1 times and 2.7 times, respectively. SageAttention also achieves superior accuracy over FlashAttention3. Comprehensive experiments confirm that our approach incurs almost no end-to-end metric loss across diverse models, including those for language processing, image generation, and video generation. The code is available at https://github.com/thu-ml/SageAttention.
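To illustrate the core idea of quantizing the attention score computation, the following is a minimal NumPy sketch, not SageAttention's actual kernel: Q and K are quantized to INT8 with symmetric per-tensor scales, the dot product Q·Kᵀ is accumulated in INT32, and the result is dequantized by the product of the two scales. All function names here are illustrative assumptions; the real method runs this as a fused GPU kernel behind the FlashAttention interface.

```python
import numpy as np

def quantize_int8(x):
    # Symmetric per-tensor INT8 quantization: map max |x| to 127.
    scale = np.max(np.abs(x)) / 127.0
    q = np.round(x / scale).clip(-127, 127).astype(np.int8)
    return q, scale

def int8_attention_scores(Q, K):
    # S = Q K^T computed with INT8 operands, accumulated in INT32,
    # then dequantized by the product of the two scales.
    q_q, s_q = quantize_int8(Q)
    k_q, s_k = quantize_int8(K)
    s_int32 = q_q.astype(np.int32) @ k_q.astype(np.int32).T
    return s_int32.astype(np.float32) * (s_q * s_k)

# Compare against the full-precision scores on random inputs.
rng = np.random.default_rng(0)
Q = rng.standard_normal((8, 16)).astype(np.float32)
K = rng.standard_normal((8, 16)).astype(np.float32)
exact = Q @ K.T
approx = int8_attention_scores(Q, K)
print("max abs error:", np.max(np.abs(exact - approx)))
```

For well-behaved inputs the quantization error of the scores is small relative to their magnitude, which is why, with the paper's additional calibration and training-time techniques, end-to-end metrics remain nearly lossless.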