🤖 AI Summary
This work addresses the high computational cost of autoregressive video generation models and their tendency toward temporal inconsistency, caused by error accumulation when training on fewer video frames. To mitigate these issues, the authors propose two complementary techniques: Local Optimization (Local Opt.), which reduces computational overhead by optimizing tokens within a local temporal window while still leveraging surrounding context, and Representation Continuity (ReCo), which enforces smoother latent representations through a Lipschitz-inspired continuity loss to suppress error propagation. Evaluated across multiple class- and text-to-video benchmarks, the proposed approach matches or surpasses existing baselines in generation quality while halving the training cost.
📝 Abstract
Autoregressive models have shown superior performance and efficiency in image generation, but remain constrained by high computational costs and prolonged training times in video generation. In this study, we explore methods to accelerate training for autoregressive video generation models through empirical analyses. Our results reveal that while training on fewer video frames significantly reduces training time, it also exacerbates error accumulation and introduces inconsistencies in the generated videos. To address these issues, we propose a Local Optimization (Local Opt.) method, which optimizes tokens within localized windows while leveraging contextual information to reduce error propagation. Inspired by Lipschitz continuity, we further propose a Representation Continuity (ReCo) strategy to improve the consistency of generated videos. ReCo applies a continuity loss that constrains representation changes between frames, improving model robustness and reducing error accumulation. Extensive experiments on class- and text-to-video datasets demonstrate that our approach achieves performance superior to the baseline while halving the training cost, without sacrificing generation quality.
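The abstract does not give the exact formulations of the two objectives. As an illustrative sketch only (function names, tensor shapes, and the squared-difference form of the penalty are assumptions, not the authors' definitions), a continuity loss in the spirit of ReCo could penalize frame-to-frame changes in latent representations, and Local Opt. could restrict the training loss to a local token window while the remaining tokens serve only as context:

```python
import numpy as np

def continuity_loss(reps: np.ndarray) -> float:
    """Sketch of a ReCo-style continuity loss (form assumed, not from the paper).

    reps: array of shape (T, D), one latent representation per frame.
    Penalizes the mean squared change between adjacent frames,
    encouraging Lipschitz-like smoothness of the representation over time.
    """
    diffs = reps[1:] - reps[:-1]        # (T-1, D) frame-to-frame changes
    return float(np.mean(diffs ** 2))

def local_window_loss(token_losses: np.ndarray, start: int, width: int) -> float:
    """Sketch of Local Opt.: average per-token losses only inside a local
    window [start, start + width). Tokens outside the window would still be
    fed to the model as context but contribute no gradient here."""
    window = token_losses[start : start + width]
    return float(np.mean(window))
```

In a real training loop these terms would be combined with the standard autoregressive loss, e.g. `total = local_window_loss(...) + lambda_cont * continuity_loss(...)`, with the weighting left to the paper's experimental settings.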