🤖 AI Summary
Long-video generation faces two key bottlenecks: diffusion models suffer from prohibitive computational cost due to their Transformer-based architectures, while autoregressive methods degrade sharply beyond the teacher model's training horizon because errors accumulate in the continuous latent space. This paper proposes a training framework that removes the need for long-video teacher supervision or retraining on long-video datasets: the student autoregressively generates long videos, and the short-horizon teacher provides guidance on segments sampled from those self-generated rollouts, mitigating error accumulation. The approach maintains temporal consistency, avoids over-exposure, and does not recompute overlapping frames as prior methods do. With scaled-up computation it generates high-fidelity videos up to 4 minutes and 15 seconds, reaching 99.9% of the span supported by the base model's position embedding, roughly 20x beyond the teacher's capability and more than 50x longer than the baseline model. On standard benchmarks and a newly introduced long-video benchmark, the method substantially outperforms baselines in both fidelity and consistency.
📝 Abstract
Diffusion models have revolutionized image and video generation, achieving unprecedented visual quality. However, their reliance on transformer architectures incurs prohibitively high computational costs, particularly when extending generation to long videos. Recent work has explored autoregressive formulations for long video generation, typically by distilling from short-horizon bidirectional teachers. Yet because teacher models cannot synthesize long videos, extrapolating student models beyond their training horizon often leads to pronounced quality degradation caused by compounding errors in the continuous latent space. In this paper, we propose a simple yet effective approach to mitigate quality degradation in long-horizon video generation without requiring supervision from long-video teachers or retraining on long-video datasets. Our approach centers on exploiting the rich knowledge of teacher models to guide the student model through sampled segments drawn from self-generated long videos. Our method maintains temporal consistency while scaling video length by up to 20x beyond the teacher's capability, avoiding common issues such as over-exposure and error accumulation without recomputing overlapping frames as previous methods do. When scaling up computation, our method can generate videos up to 4 minutes and 15 seconds long, equivalent to 99.9% of the maximum span supported by our base model's position embedding and more than 50x longer than those of our baseline model. Experiments on standard benchmarks and our proposed improved benchmark demonstrate that our approach substantially outperforms baseline methods in both fidelity and consistency. Demos of our long-horizon videos can be found at https://self-forcing-plus-plus.github.io/
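To make the training idea concrete, below is a minimal, hypothetical PyTorch sketch of the loop described above: the student autoregressively rolls out a video far beyond the teacher's horizon, a segment that fits within the teacher's window is sampled from that rollout, and the frozen short-horizon teacher provides a guidance target on just that segment. All module names, dimensions, and the simple L2 matching objective are illustrative assumptions, not the authors' actual architecture or distillation loss.

```python
# Hypothetical sketch: teacher guidance on segments sampled from
# student-generated long videos. Shapes and modules are toy placeholders.
import torch
import torch.nn as nn

LATENT_DIM = 16          # assumed per-frame latent dimensionality
TEACHER_HORIZON = 8      # number of frames the short-video teacher can handle
LONG_HORIZON = 64        # number of frames the student is asked to generate

class TinyVideoModel(nn.Module):
    """Stand-in for both the teacher and student denoisers (assumption)."""
    def __init__(self):
        super().__init__()
        self.net = nn.GRU(LATENT_DIM, LATENT_DIM, batch_first=True)

    def forward(self, latents):
        out, _ = self.net(latents)
        return out

teacher = TinyVideoModel().eval()   # short-horizon teacher, used only under no_grad
student = TinyVideoModel()          # autoregressive student being trained
opt = torch.optim.Adam(student.parameters(), lr=1e-4)

def rollout_long_video(model, num_frames):
    """Autoregressively extend a latent video far beyond the teacher horizon."""
    frames = [torch.randn(1, 1, LATENT_DIM)]            # initial noise frame
    for _ in range(num_frames - 1):
        nxt = model(torch.cat(frames, dim=1))[:, -1:]   # predict the next latent frame
        frames.append(nxt)
    return torch.cat(frames, dim=1)                     # shape (1, num_frames, dim)

for step in range(3):                                   # toy training loop
    # Self-generated long rollout (no gradients kept through the rollout here).
    with torch.no_grad():
        long_latents = rollout_long_video(student, LONG_HORIZON)

    # Sample a random segment no longer than the teacher's horizon.
    start = torch.randint(0, LONG_HORIZON - TEACHER_HORIZON + 1, (1,)).item()
    segment = long_latents[:, start:start + TEACHER_HORIZON]

    # Teacher guidance on the sampled segment (illustrative L2 matching;
    # the paper's actual distillation objective may differ).
    with torch.no_grad():
        teacher_target = teacher(segment)
    loss = torch.mean((student(segment) - teacher_target) ** 2)

    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"step {step}: segment [{start}, {start + TEACHER_HORIZON}) loss {loss.item():.4f}")
```

Because every sampled segment fits inside the teacher's window, the teacher never has to operate beyond its training length, while the student still receives supervision on content drawn from arbitrarily deep points of its own long rollouts.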