🤖 AI Summary
Autoregressive long video generation is prone to temporal drift and semantic distortion during inference due to error propagation. This work proposes a dynamic inference-time pruning method that operates entirely within the latent space, without modifying the model architecture or training procedure. By detecting deviations in latent token representations, the method identifies and discards unstable tokens, thereby interrupting the propagation of corrupted contextual information. To the best of our knowledge, this is the first approach to mitigate temporal drift solely at inference time. It significantly enhances long-term temporal consistency in generated videos while suppressing error accumulation, all without additional training or architectural changes.
📝 Abstract
Autoregressive video generation enables long video synthesis by iteratively conditioning each new batch of frames on previously generated content. However, recent work has shown that such pipelines suffer from severe temporal drift, where errors accumulate and amplify over long horizons. We hypothesize that this drift does not primarily stem from insufficient model capacity, but rather from inference-time error propagation. Specifically, we contend that drift arises from the uncontrolled reuse of corrupted latent conditioning tokens during autoregressive inference. To correct this accumulation of errors, we propose a simple inference-time method that mitigates temporal drift by identifying and removing unstable latent tokens before they are reused for conditioning. We define unstable tokens as latent tokens whose representations deviate significantly from those of the previously generated batch, indicating potential corruption or semantic drift. By explicitly removing corrupted latent tokens from the autoregressive context, rather than modifying entire spatial regions or model parameters, our method prevents unreliable latent information from influencing future generation steps. As a result, it significantly improves long-horizon temporal consistency without modifying the model architecture or training procedure, while operating entirely in latent space.
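The core idea above — flag latent tokens whose representation deviates sharply from the corresponding token in the previous batch, then drop them from the conditioning context — can be sketched as follows. This is only an illustrative implementation under stated assumptions: the paper does not specify the deviation metric or threshold, so cosine distance, the `threshold` value, and the function name `prune_unstable_tokens` are all hypothetical choices made here for clarity.

```python
import numpy as np

def prune_unstable_tokens(prev_latents, curr_latents, threshold=0.35):
    """Drop latent tokens whose representation deviates strongly from
    the matching token in the previously generated batch.

    prev_latents, curr_latents: (num_tokens, dim) arrays of latent tokens.
    threshold: maximum allowed cosine distance (hypothetical value).
    Returns the retained tokens and the boolean keep mask.
    """
    # Normalize each token vector so the dot product is a cosine similarity.
    prev_n = prev_latents / (np.linalg.norm(prev_latents, axis=1, keepdims=True) + 1e-8)
    curr_n = curr_latents / (np.linalg.norm(curr_latents, axis=1, keepdims=True) + 1e-8)

    # Per-token cosine distance to its counterpart in the previous batch.
    deviation = 1.0 - np.sum(prev_n * curr_n, axis=1)

    # Keep only stable tokens; unstable ones are excluded from the
    # autoregressive context before the next generation step.
    keep = deviation <= threshold
    return curr_latents[keep], keep
```

In an autoregressive loop, the pruned token set would replace the raw latents as conditioning for the next batch, so a corrupted token is excluded once rather than being reused and amplified over subsequent steps.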