🤖 AI Summary
This work addresses motion degradation and temporal inconsistency in autoregressive video diffusion models for long-form video generation, which the authors trace to indiscriminate use of historical temporal memory. Their analysis reveals, for the first time, the inherent heterogeneity of temporal memory, motivating a structured memory mechanism that assigns historical context three distinct roles: globally stable, short-term coherent, and dynamically structure-guided. By dynamically selecting keyframes for attention computation according to these roles, and combining this with a relaxed forcing training strategy, the method mitigates error accumulation while reducing attention overhead. Experiments on VBench-Long demonstrate significant improvements in motion dynamics and temporal consistency, underscoring the critical role of structured memory in scalable long-video generation.
📝 Abstract
Autoregressive (AR) video diffusion has recently emerged as a promising paradigm for long video generation, enabling causal synthesis beyond the limits of bidirectional models. To address the training-inference mismatch, a series of self-forcing strategies have been proposed to improve rollout stability by conditioning the model on its own predictions during training. While these approaches substantially mitigate exposure bias, extending generation to minute-scale horizons remains challenging due to progressive temporal degradation. In this work, we show that this limitation is not primarily caused by insufficient memory, but by how temporal memory is utilized during inference. Through empirical analysis, we find that increasing memory does not consistently improve long-horizon generation, and that the temporal placement of historical context significantly influences motion dynamics while leaving visual quality largely unchanged. These findings suggest that temporal memory should not be treated as a homogeneous buffer. Motivated by this insight, we introduce Relax Forcing, a structured temporal memory mechanism for AR diffusion. Instead of attending to the dense generated history, Relax Forcing decomposes temporal context into three functional roles (Sink for global stability, Tail for short-term continuity, and a dynamically selected History for structural motion guidance) and selectively incorporates only the most relevant past information. This design mitigates error accumulation during extrapolation while preserving motion evolution. Experiments on VBench-Long demonstrate that Relax Forcing improves motion dynamics and overall temporal consistency while reducing attention overhead. Our results suggest that structured temporal memory is essential for scalable long video generation, complementing existing forcing-based training strategies.
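The Sink/Tail/History decomposition can be sketched as a frame-selection step that runs before attention. The sketch below is a minimal illustration of the idea as described in the abstract, not the paper's actual algorithm: the function name, the window sizes, and the score-based selection rule for History frames are all assumptions introduced here for clarity.

```python
def select_memory(num_generated, scores, sink=2, tail=3, k=2):
    """Pick which past frames the current chunk attends to.

    Instead of the full generated history, keep three functional groups:
      - Sink:    the earliest `sink` frames, a fixed anchor for global stability
      - Tail:    the most recent `tail` frames, for short-term continuity
      - History: the `k` most relevant intermediate frames (by `scores`,
                 e.g. a hypothetical motion-relevance measure), for
                 structural motion guidance

    Returns sorted frame indices; their count is bounded by sink + tail + k,
    so attention cost no longer grows with the full history length.
    """
    idx = list(range(num_generated))
    sink_idx = idx[:sink]
    tail_idx = idx[max(sink, num_generated - tail):]
    middle = [i for i in idx if i not in sink_idx and i not in tail_idx]
    # dynamically select the k highest-scoring intermediate frames
    hist_idx = sorted(middle, key=lambda i: scores[i], reverse=True)[:k]
    return sorted(set(sink_idx + hist_idx + tail_idx))


# Example: 10 generated frames; frames 2 and 4 score highest in the middle.
selected = select_memory(10, scores=[0.1, 0.2, 0.9, 0.3, 0.8,
                                     0.1, 0.2, 0.5, 0.6, 0.7])
print(selected)  # → [0, 1, 2, 4, 7, 8, 9]
```

The point of the decomposition is that the attended set stays small and role-structured as the video grows, which is how the method reduces attention overhead while still exposing the model to globally stable, recent, and motion-relevant context.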