🤖 AI Summary
Existing video generation methods struggle to simultaneously achieve high fidelity, motion coherence, and low latency in streaming scenarios, with long-sequence generation particularly susceptible to error accumulation. This work proposes Diagonal Distillation, a novel approach that employs an asymmetric generation strategy—performing multi-step denoising in early stages and few-step synthesis in later stages—to explicitly align noise prediction with inference conditions, thereby mitigating exposure bias. The method further integrates implicit optical flow modeling and temporal context conditioning to enable efficient and coherent autoregressive video synthesis. Evaluated on 5-second video generation, the approach achieves a runtime of only 2.61 seconds (up to 31 FPS), yielding a 277.3× speedup over the original diffusion model while significantly improving visual quality and motion consistency in long sequences.
📝 Abstract
Large pretrained diffusion models have significantly enhanced the quality of generated videos, yet their use in real-time streaming remains limited. Autoregressive models offer a natural framework for sequential frame synthesis but require heavy computation to achieve high fidelity. Diffusion distillation can compress these models into efficient few-step variants, but existing video distillation approaches largely adapt image-specific methods that neglect temporal dependencies. These techniques often excel in image generation but underperform in video synthesis, exhibiting reduced motion coherence, error accumulation over long sequences, and a latency-quality trade-off. We identify two factors behind these limitations: insufficient utilization of temporal context during step reduction, and a mismatch between the noise levels implicitly assumed during next-chunk prediction and those actually encountered at inference (i.e., exposure bias). To address these issues, we propose Diagonal Distillation, which operates orthogonally to existing approaches and better exploits temporal information across both video chunks and denoising steps. Central to our approach is an asymmetric generation strategy: more steps early, fewer steps later. This design allows later chunks to inherit rich appearance information from thoroughly processed early chunks, while using partially denoised chunks as conditional inputs for subsequent synthesis. By aligning the implicit prediction of subsequent noise levels during chunk generation with the actual inference conditions, our approach mitigates error propagation and reduces oversaturation in long-range sequences. We further incorporate implicit optical flow modeling to preserve motion quality under strict step constraints. Our method generates a 5-second video in 2.61 seconds (up to 31 FPS), achieving a 277.3× speedup over the undistilled model.
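The asymmetric "more steps early, fewer steps later" schedule can be illustrated with a toy step allocator. This is a minimal sketch under my own assumptions (linear decay of per-chunk denoising steps); the function name, chunk counts, and step budgets are illustrative, not the authors' actual implementation.

```python
# Hypothetical sketch of a diagonal (asymmetric) step schedule:
# early chunks receive more denoising steps, later chunks fewer,
# so later chunks can lean on appearance context inherited from
# thoroughly denoised early chunks. Numbers are illustrative only.

def diagonal_step_schedule(num_chunks: int, max_steps: int, min_steps: int) -> list[int]:
    """Linearly decay per-chunk denoising steps from max_steps down to min_steps."""
    if num_chunks == 1:
        return [max_steps]
    schedule = []
    for i in range(num_chunks):
        frac = i / (num_chunks - 1)  # 0.0 for the first chunk, 1.0 for the last
        steps = round(max_steps - frac * (max_steps - min_steps))
        schedule.append(max(min_steps, steps))
    return schedule

# Example: a 5-chunk video with a 4-step budget for the first chunk
# and a single step for the last.
print(diagonal_step_schedule(num_chunks=5, max_steps=4, min_steps=1))
# → [4, 3, 2, 2, 1]
```

The total step count here (12) is far below the uniform alternative of 5 × 4 = 20 steps, which is the basic source of the latency savings while early chunks still get full-quality denoising.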