Accelerating Video Generation Inference with Sequential-Parallel 3D Positional Encoding Using a Global Time Index

📅 2026-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high memory consumption and first-frame latency inherent in DiT-based video generation models, which stem from their full spatiotemporal attention mechanism and hinder long-video synthesis and real-time inference. To overcome these limitations, the authors propose a causal autoregressive video generation framework that incorporates sequence-parallel inference and introduces an optimized Causal Rotary Position Embedding (Causal-RoPE SP), which reduces inter-device communication through localized computation. By further integrating operator fusion and RoPE precomputation, the method significantly enhances inference efficiency. Evaluated on an 8×A800 GPU cluster, the approach achieves sub-second first-frame latency and near-real-time inference, accelerating the generation of 5-second 480p videos by 1.58× while preserving generation quality.

📝 Abstract
Diffusion Transformer (DiT)-based video generation models inherently suffer from bottlenecks in long-video synthesis and real-time inference, which can be attributed to the use of full spatiotemporal attention. Specifically, this mechanism leads to O(N^2) memory consumption and high first-frame latency. To address these issues, we implement system-level inference optimizations for a causal autoregressive video generation pipeline. We adapt the Self-Forcing causal autoregressive framework to sequence-parallel inference and implement a sequence-parallel variant of the causal rotary position embedding, which we refer to as Causal-RoPE SP. This adaptation enables localized computation and reduces cross-rank communication during sequence-parallel execution. In addition, the computation and communication pipelines are optimized through operator fusion and RoPE precomputation. Experiments conducted on an eight-GPU A800 cluster show that the optimized system achieves comparable generation quality, sub-second first-frame latency, and near-real-time inference speed. For generating five-second 480p videos, a 1.58× speedup is achieved, thereby providing effective support for real-time interactive applications.
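The abstract's two RoPE-related ideas can be illustrated concretely: the angle table is precomputed once and reused every step, and under sequence parallelism each rank rotates only its local shard of the sequence by indexing that table with a global position (rank offset + local index), so no cross-rank communication is needed for the position embedding. The sketch below is illustrative only, assuming a standard rotary embedding and even sequence sharding; the function names, 1D sharding scheme, and NumPy implementation are not from the paper.

```python
import numpy as np

def precompute_rope_angles(head_dim, max_pos, base=10000.0):
    # Precompute the (position, frequency) angle table once; reused at
    # every inference step instead of being recomputed per layer.
    inv_freq = 1.0 / (base ** (np.arange(0, head_dim, 2) / head_dim))
    positions = np.arange(max_pos)
    return np.outer(positions, inv_freq)          # (max_pos, head_dim // 2)

def apply_rope_local(x, angles, rank, local_len):
    # Each rank rotates only its local shard. Indexing the precomputed
    # table with the global position (rank * local_len + local index)
    # makes the result identical to rotating the full sequence, with no
    # inter-device communication.
    global_pos = rank * local_len + np.arange(local_len)
    theta = angles[global_pos]                    # (local_len, head_dim // 2)
    cos, sin = np.cos(theta), np.sin(theta)
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = np.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out
```

As a sanity check of the localized-computation property: applying the rotation shard-by-shard on each rank reproduces exactly what a single device would compute on the concatenated sequence, which is why the table lookup with a global index replaces any exchange of position state between ranks.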
Problem

Research questions and friction points this paper is trying to address.

video generation
real-time inference
spatiotemporal attention
memory bottleneck
first-frame latency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sequence Parallelism
Causal-RoPE SP
Diffusion Transformer
Real-time Video Generation
Operator Fusion