🤖 AI Summary
This work addresses the temporal inconsistency and quality degradation in autoregressive diffusion-based video generation caused by error accumulation. To mitigate these issues, the authors propose a hierarchical denoising framework that rethinks the conventional frame-by-frame generation order. Instead, it adopts a global causal generation strategy organized by noise levels, performing cross-frame contextual modeling within each noise level to preserve temporal coherence and suppress error propagation. Key technical components include same-level causal attention, forward-KL regularization to maintain motion diversity, and self-rollout distillation for efficient inference. Evaluated on VBench with 20-second video generation, the method achieves state-of-the-art overall performance, the lowest temporal drift, and a 1.8× inference speedup.
📝 Abstract
Autoregressive (AR) diffusion offers a promising framework for generating videos of theoretically infinite length. However, a major challenge is maintaining temporal continuity while preventing the progressive quality degradation caused by error accumulation. To ensure continuity, existing methods typically condition on highly denoised contexts; yet this practice propagates prediction errors with high certainty, thereby exacerbating degradation. In this paper, we argue that a highly clean context is unnecessary. Drawing inspiration from bidirectional diffusion models, which denoise frames at a shared noise level while maintaining coherence, we propose that conditioning on context at the same noise level as the current block provides sufficient signal for temporal consistency while effectively mitigating error propagation. Building on this insight, we propose HiAR, a hierarchical denoising framework that reverses the conventional generation order: instead of completing each block sequentially, it performs causal generation across all blocks at every denoising step, so that each block is always conditioned on context at the same noise level. This hierarchy naturally admits pipelined parallel inference, yielding a 1.8× wall-clock speedup in our 4-step setting. We further observe that self-rollout distillation under this paradigm amplifies a low-motion shortcut inherent to the mode-seeking reverse-KL objective. To counteract this, we introduce a forward-KL regularizer in bidirectional-attention mode, which preserves motion diversity for causal inference without interfering with the distillation loss. On VBench (20s generation), HiAR achieves the best overall score and the lowest temporal drift among all compared methods.
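The key structural change described above is a loop-order reversal: instead of denoising each block to completion before moving to the next, the outer loop runs over noise levels and the inner loop sweeps causally across blocks. A minimal sketch of this control flow, with an assumed `denoise_step(block, context, level)` interface (not the paper's actual code, and with scalars standing in for latent tensors):

```python
import random

def hiar_generate(denoise_step, num_blocks, num_steps):
    """Hierarchical denoising sketch (hypothetical interface).

    Conventional AR diffusion runs all denoising steps on one block before
    starting the next, so later blocks condition on fully clean context.
    Here the outer loop is over noise levels and the inner loop sweeps
    causally over blocks, so every block is conditioned on context that
    sits at the *same* noise level as itself.
    """
    # One noisy latent per block; all start at the highest noise level.
    # (Scalars stand in for latent tensors in this sketch.)
    blocks = [random.gauss(0.0, 1.0) for _ in range(num_blocks)]
    for level in range(num_steps):      # outer loop: noise levels, high -> low
        # Snapshot so the causal context stays at the current noise level.
        prev = list(blocks)
        for i in range(num_blocks):     # inner loop: causal sweep over blocks
            blocks[i] = denoise_step(blocks[i], prev[:i], level)
    return blocks
```

Because block `i` at level `t` depends only on earlier blocks at level `t`, block `i+1` at level `t` can start as soon as block `i` finishes that level, which is what enables the pipelined parallel inference mentioned in the abstract.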