OSV: One Step is Enough for High-Quality Image to Video Generation

📅 2024-09-17
🏛️ arXiv.org
📈 Citations: 5
Influential: 0
🤖 AI Summary
Existing video diffusion models rely on multi-step iterative sampling, resulting in high computational overhead and slow inference. This paper introduces the first single-step image-to-video generation framework, leveraging a two-stage training strategy that integrates consistency distillation with GAN-based optimization. In Stage I, a single-step generator is initialized via consistency distillation from a pre-trained diffusion model. In Stage II, a novel spatiotemporal discriminator—operating directly on pixel space without requiring video latent decoding—is introduced; it enables joint adversarial training to enhance temporal coherence and fine-grained visual quality. Evaluated on OpenWebVid-1M, our method achieves an FVD of 171.15 using only one denoising step—substantially outperforming AnimateLCM (FVD=184.79, 8 steps) and approaching the performance of Stable Video Diffusion (FVD=156.94, 25 steps). To our knowledge, this is the first work to achieve high-fidelity video synthesis under strict single-step generation constraints.
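The two-stage recipe described above can be caricatured on a one-dimensional toy problem. Everything below is an illustrative stand-in, not the paper's actual method: the "teacher" is a 25-step Euler sampler instead of a pre-trained video diffusion model, the "student" is a scalar one-step generator, and the "discriminator" is a variance-sensitive logistic score rather than the paper's spatiotemporal pixel-space discriminator.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Teacher": a 25-step Euler sampler that drags noise z toward the clean
# signal 2*z (a stand-in for a pre-trained multi-step diffusion model).
def teacher_multistep(z, steps=25):
    x = z.copy()
    for _ in range(steps):
        x = x + (2.0 * z - x) / steps
    return x

# "Student": a one-step generator g(z) = w * z.
w = 0.1

# Stage I: consistency distillation -- regress the one-step student
# onto the teacher's multi-step output with plain MSE.
for _ in range(200):
    z = rng.normal(size=64)
    target = teacher_multistep(z)
    grad_w = np.mean(2.0 * (w * z - target) * z)  # d/dw of the MSE
    w -= 0.05 * grad_w
w_distilled = w  # the truncated teacher undershoots the true scale of 2

# Stage II: adversarial fine-tuning -- a variance-sensitive
# "discriminator" D(x) = sigmoid(v*x^2 + c) scores real vs. generated
# samples, and the generator is nudged to fool it.
def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

v, c = 0.0, 0.0
for _ in range(200):
    z = rng.normal(size=64)
    real = 2.0 * rng.normal(size=64)  # samples from the "data" distribution
    fake = w * z
    d_real = sigmoid(v * real**2 + c)
    d_fake = sigmoid(v * fake**2 + c)
    # discriminator ascent on log D(real) + log(1 - D(fake))
    v += 0.05 * np.mean((1 - d_real) * real**2 - d_fake * fake**2)
    c += 0.05 * np.mean((1 - d_real) - d_fake)
    # generator ascent on log D(fake); d/dw of v*(w*z)^2 is 2*v*w*z^2
    d_fake = sigmoid(v * (w * z) ** 2 + c)
    w += 0.01 * np.mean((1 - d_fake) * 2.0 * v * w * z**2)

print(round(w_distilled, 3), round(abs(w), 3))
```

The toy mirrors the narrative of the summary: distillation alone gets the one-step student close to the teacher, and the adversarial stage then pushes samples toward the real data distribution, closing the remaining quality gap.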

📝 Abstract
Video diffusion models have shown great potential in generating high-quality videos, making them an increasingly popular focus. However, their inherent iterative nature leads to substantial computational and time costs. While efforts have been made to accelerate video diffusion by reducing inference steps (through techniques like consistency distillation) and by GAN training, these approaches often fall short in either performance or training stability. In this work, we introduce a two-stage training framework that effectively combines consistency distillation with GAN training to address these challenges. Additionally, we propose a novel video discriminator design, which eliminates the need for decoding the video latents and improves the final performance. Our model is capable of producing high-quality videos in merely one step, with the flexibility to perform multi-step refinement for further performance enhancement. Our quantitative evaluation on the OpenWebVid-1M benchmark shows that our model significantly outperforms existing methods. Notably, our 1-step performance (FVD 171.15) exceeds the 8-step performance of the consistency-distillation-based method AnimateLCM (FVD 184.79), and approaches the 25-step performance of the advanced Stable Video Diffusion (FVD 156.94).
Problem

Research questions and friction points this paper is trying to address.

Reduce computational costs in video diffusion models
Improve training stability and performance in video generation
Enable high-quality one-step video generation with refinement options
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage training combines consistency distillation and GAN
Novel video discriminator avoids decoding video latents
One-step high-quality video generation with multi-step refinement