🤖 AI Summary
Existing video GPT models struggle to jointly model spatiotemporal dynamics in autonomous driving scenarios, leading to inaccurate future event prediction, short generation horizons (typically under 20 seconds), and severe temporal drift. To address this, we propose a video-GPT-style world model specifically designed for autonomous driving, which breaks away from conventional 1D autoregressive modeling by introducing a novel spatiotemporal joint modeling framework: frame-level state prediction ensures temporal coherence, while token-level prediction captures fine-grained spatial structure. We further incorporate dynamic masking and loss reweighting to suppress long-horizon error accumulation. The method integrates multi-scale visual tokenization, spatiotemporally decoupled prediction, and a GPT-based architecture. Experiments demonstrate that our model generates high-fidelity, controllable driving videos exceeding 40 seconds, more than double the horizon of state-of-the-art methods, with significant improvements in visual quality and spatiotemporal consistency.
📝 Abstract
Recent successes in autoregressive (AR) generation models, such as the GPT series in natural language processing, have motivated efforts to replicate this success in visual tasks. Some works attempt to extend this approach to autonomous driving by building video-based world models capable of generating realistic future video sequences and predicting ego states. However, prior works tend to produce unsatisfactory results, as the classic GPT framework is designed to handle 1D contextual information, such as text, and lacks the inherent ability to model the spatial and temporal dynamics essential for video generation. In this paper, we present DrivingWorld, a GPT-style world model for autonomous driving, featuring several spatial-temporal fusion mechanisms. This design enables effective modeling of both spatial and temporal dynamics, facilitating high-fidelity, long-duration video generation. Specifically, we propose a next-state prediction strategy to model temporal coherence between consecutive frames and apply a next-token prediction strategy to capture spatial information within each frame. To further enhance generalization ability, we propose a novel masking strategy and reweighting strategy for token prediction to mitigate long-term drifting issues and enable precise control. Our work demonstrates the ability to produce high-fidelity and consistent video clips of over 40 seconds in duration, which is over 2 times longer than state-of-the-art driving world models. Experiments show that, in contrast to prior works, our method achieves superior visual quality and significantly more accurate controllable future video generation. Our code is available at https://github.com/YvanYin/DrivingWorld.
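The two-level autoregressive scheme described above can be illustrated with a toy sketch: a frame-level next-state step enforces temporal coherence across frames, and a token-level next-token step fills in spatial detail within each frame. Everything below (the predictors, vocabulary size, and frame layout) is a hypothetical stand-in for illustration, not the DrivingWorld implementation.

```python
# Toy sketch of frame-level next-state prediction plus token-level
# next-token prediction. All names and "models" here are illustrative
# placeholders, not the actual DrivingWorld architecture.

VOCAB = 8             # size of the visual token vocabulary (assumed)
TOKENS_PER_FRAME = 4  # tokens per frame in this toy example (assumed)

def predict_next_state(prev_frames):
    """Frame-level predictor: summarize the history into a state for the
    next frame. Stand-in: the most frequent token of the last frame,
    with ties broken by token value for determinism."""
    last = prev_frames[-1]
    return max(set(last), key=lambda t: (last.count(t), t))

def predict_next_token(state, tokens_so_far):
    """Token-level predictor within a frame, conditioned on the frame
    state and the tokens generated so far. Stand-in: a deterministic
    function of the state and the current spatial position."""
    return (state + len(tokens_so_far)) % VOCAB

def generate(seed_frame, n_frames):
    """Roll out n_frames autoregressively: one temporal step per frame,
    then TOKENS_PER_FRAME spatial steps inside that frame."""
    frames = [seed_frame]
    for _ in range(n_frames):
        state = predict_next_state(frames)   # temporal (next-state) step
        frame = []
        for _ in range(TOKENS_PER_FRAME):    # spatial (next-token) steps
            frame.append(predict_next_token(state, frame))
        frames.append(frame)
    return frames

frames = generate(seed_frame=[1, 1, 2, 3], n_frames=3)
```

The separation mirrors the abstract's design: errors in spatial detail stay local to a frame, while the frame-level state carries long-horizon temporal context; the paper's masking and reweighting strategies would additionally act on the token-level losses during training.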