🤖 AI Summary
To address the challenge of precisely modeling spatiotemporal dynamics in videos with natural language, this paper proposes Next-Clip Diffusion, a novel autoregressive pretraining paradigm that treats video as a visual language. Unlike conventional token-level modeling, our approach unifies long-horizon spatiotemporal modeling and short-horizon generation via cross-clip noise prediction, ensuring temporal coherence and spatial consistency. Technically, it integrates diffusion modeling, a video-specific autoregressive architecture, and multi-task adaptive fine-tuning. On the Physics-IQ video prediction benchmark, it achieves a state-of-the-art score of 34.97, substantially outperforming Kling (23.64) and Wan (20.89). Moreover, the learned representations generalize effectively across six diverse downstream video generation and understanding tasks, demonstrating strong universal representational capability.
📝 Abstract
GPT has achieved remarkable success in natural language processing. However, language sequences alone are insufficient to describe spatial-temporal details of the visual world, whereas video sequences capture such details well. Motivated by this fact, we propose a concise Video-GPT in this paper by treating video as a new language for visual world modeling. By analogy to next-token prediction in GPT, we introduce a novel next-clip diffusion paradigm for pretraining Video-GPT. Different from previous works, this distinct paradigm allows Video-GPT to tackle both short-term generation and long-term prediction by autoregressively denoising a noisy clip conditioned on the clean clips in its history. Extensive experiments show that Video-GPT achieves state-of-the-art performance on video prediction, a key factor towards world modeling (Physics-IQ Benchmark: Video-GPT 34.97 vs. Kling 23.64 vs. Wan 20.89). Moreover, it adapts well to 6 mainstream video tasks spanning both video generation and understanding, showing strong generalization capacity in downstream applications. The project page is at https://Video-GPT.github.io.
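To make the next-clip diffusion idea concrete, the following is a minimal toy sketch of the autoregressive rollout described above: each new clip starts as pure noise and is iteratively denoised while conditioning on the clean clips already in the history. All names (`denoise_step`, `generate_next_clip`) and the averaging "denoiser" are illustrative assumptions for exposition, not the paper's actual diffusion transformer.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise_step(noisy_clip, history, t):
    # Placeholder denoiser: the real model would run a learned diffusion
    # network conditioned on the clean history clips. Here we simply pull
    # the noisy clip toward the mean of the history to show the data flow.
    context = np.mean(history, axis=0)
    return noisy_clip + 0.5 * (context - noisy_clip)

def generate_next_clip(history, clip_shape, num_steps=10):
    """Next-clip diffusion: start from Gaussian noise and iteratively
    denoise, conditioning every step on the clean clips in the history."""
    clip = rng.standard_normal(clip_shape)
    for t in reversed(range(num_steps)):
        clip = denoise_step(clip, history, t)
    return clip

# Autoregressive rollout: each generated clip is appended to the history
# and conditions the generation of the next one.
clip_shape = (4, 8, 8, 3)          # (frames, height, width, channels)
history = [np.zeros(clip_shape)]   # e.g. one observed conditioning clip
for _ in range(3):
    history.append(generate_next_clip(history, clip_shape))

print(len(history))  # 4 clips: 1 observed + 3 generated
```

The key property the sketch illustrates is the unification of short-term generation (denoising one clip) and long-term prediction (the history grows with each generated clip, so later clips are conditioned on the full past).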