🤖 AI Summary
To address the low inference efficiency of Diffusion Transformers (DiTs) in video generation, which stems from computationally expensive 3D full attention and a large number of sampling steps, this work introduces two core innovations. First, it identifies a pervasive tile-style repetition pattern ("Attention Tile") in video 3D attention maps and exploits it to build a family of sparse 3D attention mechanisms with linear complexity in the number of frames. Second, it shortens sampling via multi-step consistency distillation, splitting the sampling trajectory into segments and distilling within each, and unifies the two techniques through a three-stage training pipeline. Evaluated on 720p video generation, the method achieves a 7.4–7.8× speedup over Open-Sora-Plan (29 and 93 frames) using only 0.1% of its pretraining data, with negligible degradation on VBench. The approach is also amenable to distributed inference: sequence parallelism on four GPUs yields an additional 3.91× speedup. Together, these contributions enable highly efficient, high-fidelity video generation without compromising quality.
📝 Abstract
Despite their promise for synthesizing high-fidelity videos, Diffusion Transformers (DiTs) with 3D full attention suffer from expensive inference due to the complexity of attention computation and the large number of sampling steps. For example, the popular Open-Sora-Plan model takes more than 9 minutes to generate a single 29-frame video. This paper addresses the inefficiency from two aspects: 1) pruning the 3D full attention based on the redundancy within video data: we identify a prevalent tile-style repetitive pattern in the 3D attention maps for video data and advocate a new family of sparse 3D attention with linear complexity w.r.t. the number of video frames; 2) shortening the sampling process by adopting existing multi-step consistency distillation: we split the entire sampling trajectory into several segments and perform consistency distillation within each to activate few-step generation capacities. We further devise a three-stage training pipeline to conjoin the low-complexity attention and the few-step generation capacities. Notably, with 0.1% of the pretraining data, we turn the Open-Sora-Plan-1.2 model into an efficient one that is 7.4x–7.8x faster for 29- and 93-frame 720p video generation, with a marginal performance trade-off on VBench. In addition, we demonstrate that our approach is amenable to distributed inference, achieving an additional 3.91x speedup when running on 4 GPUs with sequence parallelism.
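To make the tile-style sparsity concrete, here is a minimal sketch (not the paper's actual implementation; function and parameter names are our own) of a frame-level attention mask in which each frame's tokens attend only to their own frame plus a few designated "global" frames. Because every query row keeps a constant number of entries, total attention work grows linearly with the number of frames rather than quadratically:

```python
import numpy as np

def tile_sparse_mask(num_frames, tokens_per_frame, num_global_frames=1):
    """Illustrative tile-style sparse mask (hypothetical helper, not the
    paper's API). Each frame attends to (a) its own tokens, forming the
    diagonal tiles seen in video 3D attention maps, and (b) a small fixed
    set of global frames, so per-row cost is constant in num_frames."""
    n = num_frames * tokens_per_frame
    mask = np.zeros((n, n), dtype=bool)
    for f in range(num_frames):
        rows = slice(f * tokens_per_frame, (f + 1) * tokens_per_frame)
        mask[rows, rows] = True                                    # intra-frame tile
        mask[rows, : num_global_frames * tokens_per_frame] = True  # global frames
    return mask

mask = tile_sparse_mask(num_frames=8, tokens_per_frame=4)
# Kept entries grow as O(num_frames); full 3D attention would be O(num_frames^2).
```

Such a boolean mask could, for instance, be passed as `attn_mask` to a standard attention kernel; the paper's actual sparse kernels and frame-selection rule may differ.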
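The multi-step consistency distillation described above starts by partitioning the sampling trajectory into contiguous segments, with distillation applied within each segment. A minimal sketch of that partitioning step (a hypothetical helper, not the paper's code):

```python
import numpy as np

def split_trajectory(timesteps, num_segments):
    """Hypothetical helper: partition a diffusion sampling trajectory into
    contiguous segments. Consistency distillation is then performed within
    each segment, teaching the student to jump across a whole segment in
    one step and thus enabling few-step generation."""
    return np.array_split(np.asarray(list(timesteps)), num_segments)

# E.g. a 1000-step trajectory split into 4 segments of 250 steps each.
segments = split_trajectory(range(1000), num_segments=4)
```

The number of segments trades off speed against fidelity: fewer segments mean fewer sampling steps at inference time but a harder distillation target per segment.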