Communication-Efficient Serving for Video Diffusion Models with Latent Parallelism

📅 2025-12-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Video diffusion models (VDMs) compute attention over the 3D spatiotemporal domain, so memory consumption and inter-GPU communication grow cubically, severely limiting distributed inference efficiency. To address this, we propose Latent Parallelism, the first parallelization paradigm designed specifically for VDMs. It operates exclusively in the compact latent space, enabling efficient distributed generation via dynamic rotation of spatiotemporal tiling dimensions, patch-aligned overlapping partitioning, and position-aware latent-space reconstruction. Crucially, Latent Parallelism is non-intrusive: it integrates seamlessly with existing parallel frameworks without modifying model architectures or training procedures. Experiments across three benchmarks demonstrate up to a 97% reduction in inter-GPU communication volume while preserving generation quality comparable to baseline methods. The approach significantly improves serving throughput and scalability, making large-scale VDM inference practically viable.

📝 Abstract
Video diffusion models (VDMs) perform attention computation over the 3D spatio-temporal domain. Compared to large language models (LLMs) processing 1D sequences, their memory consumption scales cubically, necessitating parallel serving across multiple GPUs. Traditional parallelism strategies partition the computational graph, requiring frequent high-dimensional activation transfers that create severe communication bottlenecks. To tackle this issue, we exploit the local spatio-temporal dependencies inherent in the diffusion denoising process and propose Latent Parallelism (LP), the first parallelism strategy tailored for VDM serving. LP decomposes the global denoising problem into parallelizable sub-problems by dynamically rotating the partitioning dimensions (temporal, height, and width) within the compact latent space across diffusion timesteps, substantially reducing the communication overhead compared to prevailing parallelism strategies. To ensure generation quality, we design a patch-aligned overlapping partition strategy that matches partition boundaries with visual patches and a position-aware latent reconstruction mechanism for smooth stitching. Experiments on three benchmarks demonstrate that LP reduces communication overhead by up to 97% over baseline methods while maintaining comparable generation quality. As a non-intrusive plug-in paradigm, LP can be seamlessly integrated with existing parallelism strategies, enabling efficient and scalable video generation services.
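The abstract's core idea of rotating the partitioning dimension across diffusion timesteps can be sketched minimally. All names, shapes, and the round-robin schedule below are illustrative assumptions, not the paper's implementation: a latent tensor of shape [T, H, W, C] is split along a different axis (temporal, height, or width) at each timestep, so no single dimension accumulates boundary artifacts across the denoising trajectory.

```python
import numpy as np

# Hypothetical sketch: cycle the partitioning axis of a [T, H, W, C] latent
# across diffusion timesteps (0 = temporal, 1 = height, 2 = width).
def partition_axis(timestep: int) -> int:
    return timestep % 3

def split_latent(latent: np.ndarray, num_devices: int, timestep: int):
    axis = partition_axis(timestep)
    # Each shard would be denoised independently on one device for this step.
    return np.array_split(latent, num_devices, axis=axis), axis

def merge_latent(shards, axis: int) -> np.ndarray:
    # Reassemble the full latent before the next timestep's re-partition.
    return np.concatenate(shards, axis=axis)

latent = np.zeros((16, 32, 32, 4))  # toy [T, H, W, C] latent
for t in range(6):
    shards, axis = split_latent(latent, num_devices=4, timestep=t)
    # ... per-device local denoising of each shard would happen here ...
    latent = merge_latent(shards, axis)
assert latent.shape == (16, 32, 32, 4)
```

Because only compact latent shards (not high-dimensional activations) cross device boundaries at each step, communication volume stays small relative to computational-graph partitioning.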
Problem

Research questions and friction points this paper is trying to address.

Reduces communication bottlenecks in video diffusion model serving
Optimizes parallelism for 3D spatio-temporal attention computations
Maintains generation quality while minimizing GPU communication overhead
Innovation

Methods, ideas, or system contributions that make the work stand out.

Latent Parallelism reduces communication overhead in video diffusion models
Dynamic rotation of partitioning dimensions in latent space enhances parallelization
Patch-aligned overlapping strategy ensures generation quality with smooth stitching
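The patch-aligned overlapping partition and smooth stitching can be illustrated with a 1D toy. Everything here is a hypothetical sketch, not the paper's method: chunk boundaries snap to multiples of a patch size, chunks overlap, and overlapped regions are blended with linear ramp weights as a stand-in for the paper's position-aware reconstruction.

```python
import numpy as np

PATCH = 4  # assumed patch size; boundaries snap to its multiples

def overlapping_partition(length: int, num_parts: int, overlap: int):
    """Return (start, end) index pairs with overlap, snapped to PATCH."""
    base = length // num_parts
    parts = []
    for i in range(num_parts):
        start = max(0, i * base - overlap)
        end = min(length, (i + 1) * base + overlap)
        start = (start // PATCH) * PATCH                         # snap down
        end = min(length, ((end + PATCH - 1) // PATCH) * PATCH)  # snap up
        parts.append((start, end))
    return parts

def stitch(parts, pieces, length: int) -> np.ndarray:
    """Blend overlapping pieces with triangular (linear ramp) weights."""
    out = np.zeros(length)
    weight = np.zeros(length)
    for (start, end), piece in zip(parts, pieces):
        n = end - start
        w = np.minimum(np.arange(1, n + 1), np.arange(n, 0, -1)).astype(float)
        out[start:end] += piece * w
        weight[start:end] += w
    return out / weight  # weighted average; chunks jointly cover [0, length)

signal = np.arange(64, dtype=float)
parts = overlapping_partition(len(signal), num_parts=4, overlap=4)
pieces = [signal[s:e] for s, e in parts]
restored = stitch(parts, pieces, len(signal))
assert np.allclose(restored, signal)  # consistent pieces stitch back exactly
```

In the real setting each piece comes from an independently denoised shard, so the pieces differ slightly near boundaries and the ramp weights smooth those seams.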
Zhiyuan Wu
Department of Computer Science and Technology, Tsinghua University
Shuai Wang
Zhongguancun Laboratory
Li Chen
Zhongguancun Laboratory
Kaihui Gao
Zhongguancun Laboratory
Dan Li
Department of Computer Science and Technology, Tsinghua University, Zhongguancun Laboratory
Yanyu Ren
Tsinghua University
Qiming Zhang
ZTE Corporation
Yong Wang
ZTE Corporation