🤖 AI Summary
Autoregressive video diffusion models suffer from exposure bias: during training they condition on ground-truth historical frames, but at inference they must condition on their own imperfect outputs, which degrades generation quality. Self Forcing is a training paradigm that closes this gap by performing causal autoregressive rollout with key-value (KV) caching during training, so each frame is generated from the model's own previously generated frames under causal attention masking. A holistic, video-level loss directly supervises the quality of the entire generated sequence rather than frame-wise reconstruction, while a few-step diffusion backbone and stochastic gradient truncation keep training computationally tractable; a rolling KV cache further enables efficient extrapolation to longer videos. By aligning training with inference, the method achieves real-time streaming generation with sub-second latency on a single GPU, and quantitative and qualitative evaluations show it matches or surpasses slower, non-causal diffusion models in video fidelity, motion coherence, and temporal consistency.
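To make the rollout concrete, below is a minimal PyTorch sketch of one training step under the assumptions stated in the comments. The names `generator` (a few-step causal denoiser that returns a frame and its KV entries) and `holistic_loss` (a stand-in for the paper's video-level objective) are hypothetical placeholders, not the authors' actual API; the sketch only illustrates the rollout-plus-truncation pattern.

```python
# Illustrative sketch, NOT the paper's implementation.
# Assumptions: `generator(z, kv_cache)` denoises one frame from noise z
# conditioned on cached KV of earlier frames; `holistic_loss(video)` is a
# placeholder for the video-level objective; both are hypothetical.
import random
import torch

def self_forcing_step(generator, holistic_loss, noise_frames):
    """One training step: roll out frames conditioned on the model's OWN outputs."""
    kv_cache = []                                # causal context, grows per frame
    generated = []
    # Stochastic gradient truncation: backprop through only one randomly
    # chosen frame's denoising chain; all other frames run without grad.
    grad_frame = random.randrange(len(noise_frames))
    for t, z in enumerate(noise_frames):
        ctx = torch.enable_grad() if t == grad_frame else torch.no_grad()
        with ctx:
            frame, kv = generator(z, kv_cache)   # few-step denoising of frame t
        kv_cache.append(kv.detach())             # condition on self-generated past
        generated.append(frame if t == grad_frame else frame.detach())
    video = torch.stack(generated, dim=1)        # (B, T, C, H, W)
    loss = holistic_loss(video)                  # score the whole rollout at once
    loss.backward()
    return loss.detach()
```

Because the KV cache entries are detached, gradient flows only through the sampled frame's own denoising steps, which is one plausible way to bound the memory cost of backpropagating through a long rollout.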
📝 Abstract
We introduce Self Forcing, a novel training paradigm for autoregressive video diffusion models. It addresses the longstanding issue of exposure bias, where models trained on ground-truth context must generate sequences conditioned on their own imperfect outputs during inference. Unlike prior methods that denoise future frames based on ground-truth context frames, Self Forcing conditions each frame's generation on previously self-generated outputs by performing autoregressive rollout with key-value (KV) caching during training. This strategy enables supervision through a holistic loss at the video level that directly evaluates the quality of the entire generated sequence, rather than relying solely on traditional frame-wise objectives. To ensure training efficiency, we employ a few-step diffusion model along with a stochastic gradient truncation strategy, effectively balancing computational cost and performance. We further introduce a rolling KV cache mechanism that enables efficient autoregressive video extrapolation. Extensive experiments demonstrate that our approach achieves real-time streaming video generation with sub-second latency on a single GPU, while matching or even surpassing the generation quality of significantly slower and non-causal diffusion models. Project website: http://self-forcing.github.io/
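The rolling KV cache mentioned above can be pictured as a fixed-capacity queue over per-frame key/value tensors: once full, the oldest frame's entries are evicted, so attention cost stays constant while the model streams past its training horizon. The sketch below is an illustrative assumption about that structure (the tensor shapes and the `max_frames` parameter are hypothetical), not the paper's implementation.

```python
# Hedged sketch of a rolling KV cache. Assumption: each frame contributes
# key/value tensors of shape (B, heads, tokens, dim); `max_frames` is a
# hypothetical capacity parameter.
from collections import deque
import torch

class RollingKVCache:
    """Fixed-capacity cache: once full, the oldest frame's KV is evicted,
    keeping attention cost constant during streaming extrapolation."""

    def __init__(self, max_frames: int):
        self.frames = deque(maxlen=max_frames)   # deque drops the oldest on append

    def append(self, k: torch.Tensor, v: torch.Tensor) -> None:
        self.frames.append((k, v))

    def as_context(self):
        """Concatenate cached KV along the token axis for causal attention."""
        ks = torch.cat([k for k, _ in self.frames], dim=2)
        vs = torch.cat([v for _, v in self.frames], dim=2)
        return ks, vs
```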