Towards One-step Causal Video Generation via Adversarial Self-Distillation

📅 2025-11-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing hybrid video generation models rely on sequential iterative denoising, leading to error accumulation and high inference latency. To address this, we propose an efficient causal video generation framework based on adversarial self-distillation, enabling a single model to support arbitrary-step (e.g., 1- or 2-step) inference without requiring repeated distillation. Our method introduces distribution-level output alignment and teacher-student internal consistency constraints, coupled with first-frame enhancement to suppress error propagation. Additionally, multi-step skip sampling is adopted to improve training stability. Evaluated on the VBench benchmark, our approach achieves state-of-the-art performance in both one-step and two-step video generation, simultaneously delivering ultra-fast inference and high visual fidelity.

📝 Abstract
Recent hybrid video generation models combine autoregressive temporal dynamics with diffusion-based spatial denoising, but their sequential, iterative nature leads to error accumulation and long inference times. In this work, we propose a distillation-based framework for efficient causal video generation that enables high-quality synthesis with extremely limited denoising steps. Our approach builds upon the Distribution Matching Distillation (DMD) framework and proposes a novel Adversarial Self-Distillation (ASD) strategy, which aligns the outputs of the student model's n-step denoising process with its (n+1)-step version at the distribution level. This design provides smoother supervision by bridging small intra-student gaps and more informative guidance by combining teacher knowledge with locally consistent student behavior, substantially improving training stability and generation quality in extremely few-step scenarios (e.g., 1-2 steps). In addition, we present a First-Frame Enhancement (FFE) strategy, which allocates more denoising steps to the initial frames to mitigate error propagation while applying larger skipping steps to later frames. Extensive experiments on VBench demonstrate that our method surpasses state-of-the-art approaches in both one-step and two-step video generation. Notably, our framework produces a single distilled model that flexibly supports multiple inference-step settings, eliminating the need for repeated re-distillation and enabling efficient, high-quality video synthesis.
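The core ASD idea described in the abstract — aligning the student's n-step output distribution with its own (n+1)-step output — can be sketched in a toy form. This is a minimal illustration, not the paper's implementation: `student_denoise` is a hypothetical one-step update rule, and `distribution_gap` (simple moment matching) stands in for the adversarial critic that performs the distribution-level comparison.

```python
import numpy as np

rng = np.random.default_rng(0)

def student_denoise(x, weight, steps):
    """Toy stand-in for the student's n-step denoising rollout:
    each step shrinks the current sample toward the data mode.
    (Hypothetical update rule, for illustration only.)"""
    for _ in range(steps):
        x = x - weight * x  # one denoising step
    return x

def distribution_gap(a, b):
    """Stand-in for the adversarial critic: a moment-matching
    distance (mean + variance) between two output batches."""
    return abs(a.mean() - b.mean()) + abs(a.var() - b.var())

# Self-distillation target: the student's n-step outputs are pushed
# toward its own frozen (n+1)-step outputs at the distribution level,
# giving a smaller, smoother gap than matching the full teacher directly.
noise = rng.standard_normal(256)
w = 0.3
out_n  = student_denoise(noise.copy(), w, steps=1)  # student, n = 1
out_n1 = student_denoise(noise.copy(), w, steps=2)  # frozen (n+1)-step target
asd_loss = distribution_gap(out_n, out_n1)
```

Because each extra step only shrinks the gap incrementally, the (n+1)-step target is closer to the n-step output than the many-step teacher would be — the "smoother supervision" the abstract refers to.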
Problem

Research questions and friction points this paper is trying to address.

Achieve efficient one-step video generation
Reduce error accumulation in autoregressive models
Enable flexible multi-step inference with single model
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarial Self-Distillation aligns student denoising steps
First-Frame Enhancement mitigates error propagation in videos
Single distilled model supports multiple inference-step settings
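The First-Frame Enhancement idea above can be sketched as a simple per-frame step schedule. The function name and the specific step counts here are illustrative assumptions; the paper only states that early frames receive more denoising steps and later frames use larger skipping steps.

```python
def ffe_schedule(num_frames, early_frames=4, early_steps=4, late_steps=1):
    """Hypothetical First-Frame Enhancement schedule: spend more
    denoising steps on the first few frames (to curb error propagation
    down the autoregressive chain), then switch to larger skip steps,
    i.e. fewer denoising steps, for the remaining frames."""
    return [early_steps if i < early_frames else late_steps
            for i in range(num_frames)]

schedule = ffe_schedule(num_frames=8)
# → [4, 4, 4, 4, 1, 1, 1, 1]
```

Front-loading the compute budget this way keeps total inference cost low while anchoring the frames that all later frames condition on.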