SVG-EAR: Parameter-Free Linear Compensation for Sparse Video Generation via Error-aware Routing

📅 2026-03-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high computational cost of dense attention in diffusion-based Transformer video generation by proposing a training-free sparsification method. It identifies attention blocks with highly similar keys and values through semantic clustering, employs cluster centroids for parameter-free linear compensation, and introduces a lightweight error probe to enable error-aware routing that prioritizes blocks with the highest error-to-cost ratio. The study establishes, for the first time, a theoretical connection between clustering quality and attention reconstruction error, achieving a superior quality-efficiency trade-off. Experiments on Wan2.2 and HunyuanVideo yield PSNR scores of 29.759 and 31.043, respectively, with speedups of 1.77× and 1.93×, significantly outperforming existing approaches and establishing a new Pareto frontier.
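The compensation idea above — summarizing a skipped attention block's keys and values by cluster centroids and reweighting the softmax by cluster size — can be sketched in NumPy. This is an illustrative reconstruction, not the paper's implementation: the clustering routine, initialization, and all function names here are assumptions.

```python
import numpy as np

def centroid_compensation(Q, K, V, n_clusters=4, n_iters=10):
    """Approximate the attention output of a skipped (K, V) block using
    cluster centroids instead of the full keys/values.

    Idea: group the m keys into a few clusters; replace each cluster by
    its centroid key/value pair, weighting its softmax term by the
    cluster size. Accurate when keys within a block are highly similar.
    Hypothetical sketch -- the paper's clustering details may differ.
    """
    m, d = K.shape
    # Farthest-point initialization so well-separated clusters are each covered.
    centers = [K[0]]
    for _ in range(n_clusters - 1):
        dist = ((K[:, None, :] - np.array(centers)[None, :, :]) ** 2).sum(-1).min(1)
        centers.append(K[np.argmax(dist)])
    centers = np.array(centers)
    # A few naive k-means iterations over the keys.
    for _ in range(n_iters):
        assign = ((K[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(n_clusters):
            if (assign == j).any():
                centers[j] = K[assign == j].mean(0)
    sizes = np.bincount(assign, minlength=n_clusters).astype(float)
    # Value centroid paired with each key centroid (zeros for empty clusters).
    v_bar = np.stack([V[assign == j].mean(0) if sizes[j] > 0 else np.zeros(d)
                      for j in range(n_clusters)])
    # Size-weighted softmax over centroid keys approximates the full softmax.
    logits = Q @ centers.T / np.sqrt(d)                         # (n, c)
    w = np.exp(logits - logits.max(-1, keepdims=True)) * sizes  # weight by cluster size
    return (w @ v_bar) / w.sum(-1, keepdims=True)               # (n, d)
```

When the block's keys and values really do cluster tightly, this centroid approximation is nearly indistinguishable from exact attention over the block, which is the regime the paper's theoretical bound ties to clustering quality.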

📝 Abstract
Diffusion Transformers (DiTs) have become a leading backbone for video generation, yet their quadratic attention cost remains a major bottleneck. Sparse attention reduces this cost by computing only a subset of attention blocks. However, prior methods often either drop the remaining blocks, which incurs information loss, or rely on learned predictors to approximate them, introducing training overhead and potential output distribution shift. In this paper, we show that the missing contributions can be recovered without training: after semantic clustering, keys and values within each block exhibit strong similarity and can be well summarized by a small set of cluster centroids. Based on this observation, we introduce SVG-EAR, a parameter-free linear compensation branch that uses the centroids to approximate skipped blocks and recover their contributions. While centroid compensation is accurate for most blocks, it can fail on a small subset. Standard sparsification typically selects blocks by attention scores, which indicate where the model places its attention mass, but not where the approximation error would be largest. SVG-EAR therefore performs error-aware routing: a lightweight probe estimates the compensation error for each block, and we compute exactly those blocks with the highest error-to-cost ratio while compensating the skipped ones. We provide theoretical guarantees relating attention reconstruction error to clustering quality, and empirically show that SVG-EAR improves the quality-efficiency trade-off and increases throughput at the same generation fidelity on video diffusion tasks. Overall, SVG-EAR establishes a clear Pareto frontier over prior approaches, achieving up to 1.77$\times$ and 1.93$\times$ speedups while maintaining PSNRs of up to 29.759 and 31.043 on Wan2.2 and HunyuanVideo, respectively.
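The error-aware routing rule described above — spend the exact-computation budget on the blocks where compensation would err most per unit of cost — amounts to a greedy knapsack-style selection. A minimal sketch, assuming the probe has already produced per-block error estimates and per-block compute costs (the function name and interface are illustrative, not the paper's API):

```python
def error_aware_route(errors, costs, budget):
    """Greedy error-aware routing: compute exactly the blocks with the
    highest estimated error-to-cost ratio until the budget is spent;
    all remaining blocks are skipped and handled by compensation.

    errors -- probe-estimated compensation error per block
    costs  -- compute cost per block (e.g., proportional to block size)
    budget -- total exact-attention compute allowed
    Hypothetical sketch of the routing rule described in the abstract.
    """
    # Rank blocks by how much estimated error each unit of compute removes.
    order = sorted(range(len(errors)),
                   key=lambda i: errors[i] / costs[i], reverse=True)
    exact, spent = [], 0.0
    for i in order:
        if spent + costs[i] <= budget:
            exact.append(i)
            spent += costs[i]
    skipped = [i for i in range(len(errors)) if i not in exact]
    return exact, skipped
```

The key contrast with score-based sparsification is the ranking criterion: attention mass says where the model looks, while the error-to-cost ratio says where skipping would hurt most relative to what exact computation costs.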
Problem

Research questions and friction points this paper is trying to address.

sparse video generation
diffusion transformers
attention approximation
parameter-free compensation
error-aware routing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparse Attention
Error-aware Routing
Parameter-free Compensation
Diffusion Transformers
Video Generation