Survey of Video Diffusion Models: Foundations, Implementations, and Applications

📅 2025-04-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This survey systematically addresses key challenges in diffusion-based video generation: temporal inconsistency, high computational cost, and ethical risks. We propose a fine-grained methodological taxonomy that is the first to unify temporal-consistency modeling, efficient training strategies, and ethical governance within a single analytical framework. Dedicated sections comprehensively review evaluation metrics, industrial-grade deployment pipelines, and engineering best practices. We further integrate emerging directions, including video representation learning, motion modeling, video super-resolution, and cross-modal synergies (e.g., video question answering and retrieval), to bridge theoretical advances with real-world implementation. Compared with existing surveys, ours offers broader coverage, greater novelty, and deeper technical insight. To support reproducibility and community advancement, we open-source a structured literature repository. This work serves as an authoritative reference and practical guide for researchers and engineers in generative video research and development.

📝 Abstract
Recent advances in diffusion models have revolutionized video generation, offering superior temporal consistency and visual quality compared to traditional generative adversarial network (GAN)-based approaches. While this emerging field shows tremendous promise in applications, it faces significant challenges in motion consistency, computational efficiency, and ethical considerations. This survey provides a comprehensive review of diffusion-based video generation, examining its evolution, technical foundations, and practical applications. We present a systematic taxonomy of current methodologies, analyze architectural innovations and optimization strategies, and investigate applications across low-level vision tasks such as denoising and super-resolution. Additionally, we explore the synergies between diffusion-based video generation and related domains, including video representation learning, question answering, and retrieval. Compared to existing surveys (Lei et al., 2024a;b; Melnik et al., 2024; Cao et al., 2023; Xing et al., 2024c), which focus on specific aspects of video generation, such as human video synthesis (Lei et al., 2024a) or long-form content generation (Lei et al., 2024b), our work provides a broader, more updated, and more fine-grained perspective on diffusion-based approaches, with a special section on evaluation metrics, industry solutions, and training engineering techniques in video generation. This survey serves as a foundational resource for researchers and practitioners working at the intersection of diffusion models and video generation, providing insights into both the theoretical frameworks and practical implementations that drive this rapidly evolving field. A structured list of related works involved in this survey is also available on https://github.com/Eyeline-Research/Survey-Video-Diffusion.
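To make the abstract's framing concrete, below is a minimal toy sketch of the DDPM-style reverse (denoising) process applied to a video tensor of shape (frames, height, width, channels). This is an illustrative assumption, not the survey's method: the noise predictor here is a zero-returning placeholder, whereas real video diffusion models use learned spatio-temporal networks conditioned on text or reference frames, and the schedule values are arbitrary small defaults.

```python
import numpy as np

def make_schedule(T=50, beta_start=1e-4, beta_end=0.02):
    """Linear variance schedule; alpha_bar is the cumulative signal fraction."""
    betas = np.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    return betas, alphas, alpha_bars

def toy_eps_model(x_t, t):
    # Placeholder noise predictor (assumption for illustration only);
    # a real model is a trained spatio-temporal denoising network.
    return np.zeros_like(x_t)

def sample_video(shape=(8, 16, 16, 3), T=50, seed=0):
    """Run the DDPM reverse process from pure noise to a sample."""
    rng = np.random.default_rng(seed)
    betas, alphas, alpha_bars = make_schedule(T)
    x = rng.standard_normal(shape)  # start from Gaussian noise
    for t in reversed(range(T)):
        eps = toy_eps_model(x, t)
        # Posterior mean of x_{t-1} given the predicted noise.
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / np.sqrt(alphas[t])
        # Add fresh noise at every step except the last.
        noise = rng.standard_normal(shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise
    return x

video = sample_video()
print(video.shape)  # (8, 16, 16, 3)
```

The temporal-consistency challenge the survey highlights arises precisely because, in this naive per-pixel formulation, nothing couples the 8 frames; the architectural innovations reviewed in the survey (temporal attention, motion modeling) exist to add that coupling.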
Problem

Research questions and friction points this paper is trying to address.

Addressing motion consistency in video diffusion models
Improving computational efficiency of video generation
Exploring ethical considerations in diffusion-based video synthesis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion models enhance video generation quality
Taxonomy of methodologies and optimization strategies
Synergies with video representation learning explored
Yimu Wang
University of Waterloo
Multi-modal Learning
Xuye Liu
University of Waterloo
Natural Language Processing, LLM, Human-AI Collaboration, Multi-modal Learning
Wei Pang
University of Waterloo
Li Ma
Netflix Eyeline Studios
Shuai Yuan
Duke University
Paul E. Debevec
Netflix Eyeline Studios
Ning Yu
Netflix Eyeline Studios