🤖 AI Summary
Video diffusion models (VDMs) compute attention over the 3D spatiotemporal domain, leading to cubic growth in memory and inter-GPU communication overhead, which severely limits distributed inference efficiency. To address this, we propose Latent Parallelism: the first parallelization paradigm specifically designed for VDMs. It operates exclusively in the compact latent space, enabling efficient distributed generation via dynamic rotation of spatiotemporal tiling dimensions, patch-aligned overlapping partitioning, and position-aware latent-space reconstruction. Crucially, Latent Parallelism is non-intrusive: it integrates seamlessly with existing parallel frameworks without modifying model architecture or training procedures. Experiments across three benchmarks demonstrate up to a 97% reduction in inter-GPU communication volume while preserving generation quality comparable to baseline methods. The approach significantly improves serving throughput and scalability, making large-scale VDM inference practically viable.
📝 Abstract
Video diffusion models (VDMs) perform attention computation over the 3D spatio-temporal domain. Compared to large language models (LLMs) processing 1D sequences, their memory consumption scales cubically, necessitating parallel serving across multiple GPUs. Traditional parallelism strategies partition the computational graph, requiring frequent high-dimensional activation transfers that create severe communication bottlenecks. To tackle this issue, we exploit the local spatio-temporal dependencies inherent in the diffusion denoising process and propose Latent Parallelism (LP), the first parallelism strategy tailored for VDM serving. LP decomposes the global denoising problem into parallelizable sub-problems by dynamically rotating the partitioning dimensions (temporal, height, and width) within the compact latent space across diffusion timesteps, substantially reducing the communication overhead compared to prevailing parallelism strategies. To ensure generation quality, we design a patch-aligned overlapping partition strategy that matches partition boundaries with visual patches and a position-aware latent reconstruction mechanism for smooth stitching. Experiments on three benchmarks demonstrate that LP reduces communication overhead by up to 97% over baseline methods while maintaining comparable generation quality. As a non-intrusive plug-in paradigm, LP can be seamlessly integrated with existing parallelism strategies, enabling efficient and scalable video generation services.
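The tiling schedule described above (rotating the partition axis across timesteps and cutting overlapping, patch-aligned tiles) can be sketched as follows. This is a minimal illustration under assumed conventions; the function names and the simple round-robin rotation are hypothetical, not taken from the paper's implementation.

```python
def rotate_axis(timestep: int) -> str:
    """Hypothetical rotation schedule: cycle the split dimension
    across diffusion timesteps, T -> H -> W -> T -> ..."""
    return ("T", "H", "W")[timestep % 3]

def patch_aligned_tiles(length: int, num_gpus: int, patch: int,
                        overlap_patches: int = 1) -> list[tuple[int, int]]:
    """Split a latent extent of `length` units along one axis into
    `num_gpus` overlapping tiles whose boundaries land on multiples
    of `patch` (so cuts align with visual patches); each tile is
    extended by `overlap_patches` on both sides for smooth stitching."""
    n_patches = length // patch
    base = n_patches // num_gpus
    tiles = []
    for rank in range(num_gpus):
        start = rank * base
        stop = n_patches if rank == num_gpus - 1 else (rank + 1) * base
        # Grow the tile by the overlap margin, clamped to the latent extent.
        lo = max(0, start - overlap_patches) * patch
        hi = min(n_patches, stop + overlap_patches) * patch
        tiles.append((lo, hi))
    return tiles

# Example: a latent axis of 16 units, patch size 2, split across 2 GPUs.
# Each GPU denoises only its tile, so only the overlap regions (rather
# than full activations) would need exchanging between devices.
print(rotate_axis(0), rotate_axis(1), rotate_axis(2))  # T H W
print(patch_aligned_tiles(16, num_gpus=2, patch=2))    # [(0, 10), (6, 16)]
```

Because the rotation changes which axis is cut at each timestep, no single seam persists across the whole denoising trajectory, which is the intuition behind stitching tiles without visible boundary artifacts.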