🤖 AI Summary
Current video generation models suffer from a fundamental trade-off: they either generalize poorly across multiple conditioning modalities or have low inference efficiency. To address this, the authors propose VDOT, an efficient unified video generation framework. Its core innovation is a distribution-matching distillation paradigm: it employs computational optimal transport to align the geometric structure of the real and generated score distributions, avoiding the zero-forcing behavior and gradient collapse that KL-divergence-based distillation can exhibit, while integrating discriminator-guided learning to enhance perceptual quality and training stability. Methodologically, VDOT combines few-step denoising (only four steps), discriminator-augmented distillation, and an automated data annotation and filtering pipeline. Experiments demonstrate that 4-step VDOT matches or surpasses 100-step baseline models across diverse video creation tasks, a roughly 25× reduction in denoising steps, significantly advancing the practical deployment of video generation systems.
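The speedup claim above comes from reducing the number of denoising steps (4 vs. 100), not from changing what the sampler converges to. A minimal toy sketch of this idea, with a deliberately simplified "denoiser" that just pulls the state toward a target (this stand-in update rule is illustrative only, not the VDOT architecture):

```python
import numpy as np

def denoise(x, target, steps):
    """Iteratively refine noisy state x toward target over `steps` updates.

    Toy Euler-style schedule: at step i the state moves 1/i of the way to
    the target, so the final step (i = 1) lands exactly on it. A 4-step
    schedule therefore needs 25x fewer evaluations than a 100-step one
    to reach the same endpoint in this toy setting.
    """
    for i in range(steps, 0, -1):
        x = x + (target - x) / i
    return x
```

In a real distilled model the few-step generator is trained so that its 4-step trajectory matches the quality of the teacher's long schedule; here the toy dynamics make both step counts reach the same result by construction.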
📝 Abstract
The rapid development of generative models has significantly advanced image and video applications. Among these, video creation, which aims to generate videos under various conditions, has gained substantial attention. However, existing video creation models either support only a few specific conditions or suffer from excessively long generation times due to complex model inference, making them impractical for real-world applications. To mitigate these issues, we propose an efficient unified video creation model, named VDOT. Concretely, we model the training process with the distribution matching distillation (DMD) paradigm. Rather than relying on Kullback-Leibler (KL) minimization alone, we additionally employ a novel computational optimal transport (OT) technique to optimize the discrepancy between the real and fake score distributions. The OT distance inherently imposes geometric constraints, mitigating the zero-forcing and gradient-collapse issues that can arise during KL-based distillation in the few-step generation setting, and thus enhances the efficiency and stability of the distillation process. Further, we integrate a discriminator so that the model can perceive real video data, thereby improving the quality of generated videos. To support training unified video creation models, we propose a fully automated pipeline for video data annotation and filtering that accommodates multiple video creation tasks. We also curate a unified testing benchmark, UVCBench, to standardize evaluation. Experiments demonstrate that our 4-step VDOT outperforms or matches other baselines that use 100 denoising steps.
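To make the OT-versus-KL idea concrete: unlike KL divergence, which vanishes or explodes when the two distributions have little overlapping support, an entropic OT (Sinkhorn) distance stays finite and geometry-aware. Below is a hedged, self-contained sketch of a Sinkhorn distance between two batches of vectors (e.g., score samples); the function name and hyperparameters (`eps`, `n_iters`) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sinkhorn_distance(x, y, eps=0.1, n_iters=100):
    """Entropic-regularized OT cost between point clouds x (n, d) and y (m, d)."""
    n, m = len(x), len(y)
    # Pairwise squared-Euclidean cost matrix C[i, j] = ||x_i - y_j||^2.
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    K = np.exp(-C / eps)                             # Gibbs kernel
    a = np.full(n, 1.0 / n)                          # uniform source marginal
    b = np.full(m, 1.0 / m)                          # uniform target marginal
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):                         # Sinkhorn iterations:
        u = a / (K @ v)                              # match row marginals
        v = b / (K.T @ u)                            # match column marginals
    P = u[:, None] * K * v[None, :]                  # transport plan
    return (P * C).sum()                             # transport cost <P, C>
```

The returned cost grows smoothly with the geometric displacement between the two batches, which is why its gradient remains informative even when sample supports barely overlap, the failure mode attributed to pure KL-based distillation above.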