SageAttention2++: A More Efficient Implementation of SageAttention2

📅 2025-05-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the computational inefficiency of standard attention mechanisms, whose O(n²) time complexity becomes prohibitive for long sequences, this paper proposes a hardware-aware attention optimization. SageAttention2++ redesigns the attention kernel around the faster GPU instruction for FP8 matrix multiplication with FP16 accumulation (FP8 Matmul → FP16 Accumulate), which is 2× faster than the FP8 Matmul instruction used in SageAttention2. The approach maintains the same attention accuracy as SageAttention2 while delivering a 3.9× speedup over FlashAttention; end-to-end generation quality degradation is negligible, and the method accelerates language, image, and video generation models alike. The core contribution is a software-hardware co-design: leveraging low-precision computation for throughput gains while preserving SageAttention2-level accuracy, thereby pushing out the efficiency-accuracy trade-off frontier in attention computation.
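The pattern the summary describes, quantizing operands into the FP8 range and accumulating their products in FP16, can be sketched in NumPy. This is a hedged illustration under stated assumptions, not the paper's CUDA kernel: NumPy has no FP8 dtype, so e4m3 is approximated by scaling into its dynamic range (max ±448), and the quantization scheme and prescaling constant are illustrative choices, not taken from the paper.

```python
import numpy as np

# Illustrative sketch (not the paper's kernel): emulate FP8-range
# quantization followed by a matmul whose products are accumulated in
# float16. NumPy has no FP8 dtype, so e4m3 is approximated by scaling
# into its dynamic range; real FP8 mantissa rounding is coarser than
# the float16 storage used here.

E4M3_MAX = 448.0   # largest finite FP8 e4m3 value
PRESCALE = 16.0    # keep float16 products below the 65504 overflow limit

def quantize(x):
    """Per-tensor scale into the e4m3 range; return (scaled values, scale)."""
    s = np.abs(x).max() / E4M3_MAX
    return (x / s).astype(np.float32), s

def fp8_matmul_fp16_acc(a, b):
    """Matmul of quantized operands with a float16 accumulator."""
    qa, sa = quantize(a)
    qb, sb = quantize(b)
    # Pre-divide operands so individual float16 products cannot overflow.
    ha = (qa / PRESCALE).astype(np.float16)
    hb = (qb / PRESCALE).astype(np.float16)
    # Per-k outer products, summed with a float16 accumulator (dtype=float16).
    acc = (ha[:, :, None] * hb[None, :, :]).sum(axis=1, dtype=np.float16)
    # Dequantize back to full precision.
    return acc.astype(np.float32) * (sa * sb * PRESCALE * PRESCALE)

rng = np.random.default_rng(0)
p = rng.standard_normal((64, 64)).astype(np.float32) * 0.1  # stand-in for P
v = rng.standard_normal((64, 64)).astype(np.float32)        # stand-in for V

ref = p @ v
out = fp8_matmul_fp16_acc(p, v)
rel_err = np.abs(out - ref).max() / np.abs(ref).max()
print(f"max relative error vs FP32 matmul: {rel_err:.5f}")
```

On real hardware the FP16-accumulating FP8 Matmul instruction runs at higher throughput than its FP32-accumulating counterpart, which is the speedup the paper exploits; the scale factors here only mimic the dequantization bookkeeping such kernels perform.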

📝 Abstract
The efficiency of attention is critical because its time complexity grows quadratically with sequence length. SageAttention2 addresses this by utilizing quantization to accelerate matrix multiplications (Matmul) in attention. To further accelerate SageAttention2, we propose to utilize the faster instruction of FP8 Matmul accumulated in FP16. The instruction is 2x faster than the FP8 Matmul used in SageAttention2. Our experiments show that SageAttention2++ achieves a 3.9x speedup over FlashAttention while maintaining the same attention accuracy as SageAttention2. This means SageAttention2++ effectively accelerates various models, including those for language, image, and video generation, with negligible end-to-end metrics loss. The code will be available at https://github.com/thu-ml/SageAttention.
Problem

Research questions and friction points this paper is trying to address.

Improving the efficiency of attention, whose time complexity grows quadratically with sequence length
Further accelerating the quantized matrix multiplications (Matmul) inside the attention kernel
Maintaining attention accuracy while speeding up language, image, and video generation models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Utilizes the FP8 Matmul instruction with FP16 accumulation, which is 2x faster than the FP8 Matmul used in SageAttention2
Achieves a 3.9x speedup over FlashAttention
Maintains the attention accuracy of SageAttention2 with negligible end-to-end metrics loss
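The bullets above hinge on FP16 accumulation being numerically viable: float16 overflows above 65504, so naively summing products of values near the e4m3 maximum saturates to infinity. A minimal numeric illustration of this constraint (the paper's actual kernel-level handling is not reproduced here; the rescaling below is a generic mitigation shown for intuition):

```python
import numpy as np

# Why FP16 accumulation needs care: each worst-case product of two e4m3
# maxima (448 * 448 = 200704) already exceeds the float16 limit of 65504,
# so a naive float16 accumulation saturates to inf.
vals = np.full(64, 448.0 * 448.0, dtype=np.float32)
naive = vals.astype(np.float16).sum(dtype=np.float16)
print(naive)  # inf

# Rescaling before accumulation keeps every partial sum in range; the
# factor 4096 is an illustrative choice, not the paper's.
partial = (vals / 4096.0).astype(np.float16).sum(dtype=np.float16)
result = float(partial) * 4096.0  # undo the rescale in full precision
print(result)  # 12845056.0, the exact sum 64 * 448**2
```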