Rectified SpaAttn: Revisiting Attention Sparsity for Efficient Video Generation

📅 2025-11-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing sparse attention methods in video generation suffer from systematic bias—over-amplifying weights of salient tokens while completely neglecting non-salient ones—leading to degraded attention fidelity and generation performance. This work is the first to identify and characterize this bias, proposing an implicit full-attention reference mechanism that jointly optimizes salient and non-salient token contributions via isolated pooling redistribution and gain-aware correction. The approach significantly improves alignment between sparse and full-attention maps. Furthermore, we introduce error-aware reweighting, multimodal pooling, and Triton-optimized kernels to accelerate inference. Our method achieves 3.33× and 2.08× speedups on HunyuanVideo and Wan 2.1, respectively, without compromising generation quality. Code is publicly available.

📝 Abstract
Diffusion Transformers dominate video generation, but the quadratic complexity of attention computation introduces substantial latency. Attention sparsity reduces computational costs by focusing on critical tokens while ignoring non-critical tokens. However, existing methods suffer from severe performance degradation. In this paper, we revisit attention sparsity and reveal that existing methods induce systematic biases in attention allocation: (1) excessive focus on critical tokens amplifies their attention weights; (2) complete neglect of non-critical tokens causes the loss of relevant attention weights. To address these issues, we propose Rectified SpaAttn, which rectifies attention allocation with implicit full attention reference, thereby enhancing the alignment between sparse and full attention maps. Specifically: (1) for critical tokens, we show that their bias is proportional to the sparse attention weights, with the ratio governed by the amplified weights. Accordingly, we propose Isolated-Pooling Attention Reallocation, which calculates accurate rectification factors by reallocating multimodal pooled weights. (2) for non-critical tokens, recovering attention weights from the pooled query-key yields attention gains but also introduces pooling errors. Therefore, we propose Gain-Aware Pooling Rectification, which ensures that the rectified gain consistently surpasses the induced error. Moreover, we customize and integrate the Rectified SpaAttn kernel using Triton, achieving up to 3.33 and 2.08 times speedups on HunyuanVideo and Wan 2.1, respectively, while maintaining high generation quality. We release Rectified SpaAttn as open-source at https://github.com/BienLuky/Rectified-SpaAttn .
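The abstract's core claim is that plain block-sparse attention both over-amplifies the selected (critical) tokens, because the softmax renormalizes over a smaller set, and discards the tail mass of the skipped (non-critical) tokens. The sketch below illustrates that mechanism in NumPy: each query keeps only its top-scoring key blocks, and a `rectify` flag keeps every skipped block in the softmax through one mean-pooled key/value per block, scaled by block size. This is an illustrative stand-in under assumed simplifications, not the paper's actual Isolated-Pooling Attention Reallocation or Gain-Aware Pooling Rectification; all function names and the pooling scheme here are hypothetical.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def full_attention(q, k, v):
    # reference: dense scaled-dot-product attention
    d = q.shape[-1]
    return np.stack([softmax(qi @ k.T / np.sqrt(d)) @ v for qi in q])

def sparse_attention(q, k, v, block=4, keep=2, rectify=False):
    """Block-sparse attention: each query attends to its `keep` highest-
    scoring key blocks (ranked by mean-pooled keys). With rectify=True,
    every skipped block stays in the softmax as one pooled key/value
    weighted by block size, so selected weights are not over-amplified
    and the skipped tail mass is not discarded. Hypothetical sketch, not
    the paper's exact algorithm."""
    n, d = k.shape
    nb = n // block
    k_pool = k.reshape(nb, block, d).mean(axis=1)
    v_pool = v.reshape(nb, block, d).mean(axis=1)
    out = np.empty_like(q, dtype=float)
    for i, qi in enumerate(q):
        pooled_logits = qi @ k_pool.T / np.sqrt(d)
        top = np.sort(np.argsort(pooled_logits)[-keep:])
        rest = np.setdiff1d(np.arange(nb), top)
        sel = np.concatenate([np.arange(b * block, (b + 1) * block) for b in top])
        logits = qi @ k[sel].T / np.sqrt(d)
        if not rectify:
            # plain sparse: softmax over selected tokens only
            out[i] = softmax(logits) @ v[sel]
            continue
        m = max(logits.max(), pooled_logits[rest].max() if rest.size else -np.inf)
        e_sel = np.exp(logits - m)
        # each skipped block contributes ~ block * exp(pooled logit) of mass
        e_pool = block * np.exp(pooled_logits[rest] - m)
        z = e_sel.sum() + e_pool.sum()
        out[i] = (e_sel @ v[sel] + e_pool @ v_pool[rest]) / z
    return out
```

A sanity check on the construction: when every block is selected, the rectified path has no skipped blocks left to approximate, so it reduces exactly to full attention; with fewer blocks kept, the pooled terms stand in for the discarded softmax mass.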
Problem

Research questions and friction points this paper is trying to address.

Addresses systematic biases in attention allocation for video generation
Rectifies excessive focus on critical tokens in sparse attention mechanisms
Recovers attention weights that are lost when non-critical tokens are neglected entirely
Innovation

Methods, ideas, or system contributions that make the work stand out.

Rectified SpaAttn rectifies attention allocation biases
Isolated-Pooling Attention Reallocation for critical tokens
Gain-Aware Pooling Rectification for non-critical tokens
Xuewen Liu
Institute of Automation, Chinese Academy of Sciences
Model compression
Zhikai Li
Institute of Automation, Chinese Academy of Sciences
Jing Zhang
Institute of Automation, Chinese Academy of Sciences
Mengjuan Chen
Institute of Automation, Chinese Academy of Sciences
Qingyi Gu
Institute of Automation, Chinese Academy of Sciences
High-speed vision, cell analysis