🤖 AI Summary
This work addresses a limitation of existing driving world models: they lack a unified, shareable motion representation between visual generation and motion planning, which constrains planning accuracy. To bridge this gap, the authors propose WorldDrive, a framework that introduces trajectory tokens to build a trajectory-aware world model, unifying visual and motion representations for joint optimization of scene generation and real-time planning. The approach combines a vision-motion joint encoder, a multimodal planner, and a future-aware reward mechanism that leverages latent representations from a frozen world model to select optimal trajectories online. Experiments show that WorldDrive achieves state-of-the-art planning performance among vision-only methods on the NAVSIM, NAVSIM-v2, and nuScenes benchmarks, while also enabling high-quality, action-controllable video generation.
📝 Abstract
End-to-end autonomous driving aims to generate safe and plausible planning policies directly from raw sensor input. Driving world models have shown great potential for learning rich representations by predicting the future evolution of a driving scene. However, existing driving world models focus primarily on visual scene representation; the motion representation is not explicitly designed to be shared with and inherited by the planner, leaving a gap between the optimization of visual scene generation and the requirements of precise motion planning. We present WorldDrive, a holistic framework that couples scene generation and real-time planning by unifying vision and motion representations. We first introduce a Trajectory-aware Driving World Model, which conditions on a trajectory vocabulary to enforce consistency between visual dynamics and motion intentions, enabling the generation of diverse, plausible future scenes conditioned on a specific trajectory. We then transfer the vision and motion encoders to a downstream Multi-modal Planner, ensuring that the driving policy operates on mature representations pre-optimized by scene generation. A simple interaction among the motion representation, the visual representation, and the ego status suffices to generate high-quality, multi-modal trajectories. Furthermore, to exploit the world model's foresight, we propose a Future-aware Rewarder, which distills future latent representations from the frozen world model to evaluate and select optimal trajectories in real time. Extensive experiments on the NAVSIM, NAVSIM-v2, and nuScenes benchmarks demonstrate that WorldDrive achieves leading planning performance among vision-only methods while retaining high-fidelity, action-controlled video generation capabilities, providing strong evidence for the effectiveness of unifying vision and motion representations for robust autonomous driving.
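The trajectory-selection loop the abstract describes can be illustrated schematically. The sketch below is not the paper's implementation; `world_model`, `reward_head`, and the trajectory format are hypothetical placeholders standing in for the frozen world model's latent rollout and the distilled Future-aware Rewarder head: each candidate trajectory is scored by the reward of its predicted future latent, and the highest-scoring one is selected at inference time.

```python
from typing import Callable, List, Sequence, Tuple

Trajectory = List[Tuple[float, float]]  # a sequence of (x, y) waypoints

def select_trajectory(
    candidates: Sequence[Trajectory],
    world_model: Callable[[Trajectory], List[float]],  # frozen: trajectory -> future latent
    reward_head: Callable[[List[float]], float],       # distilled rewarder: latent -> score
) -> Trajectory:
    """Score each candidate by the reward of its predicted future latent
    and return the highest-scoring trajectory (hypothetical interface)."""
    scores = [reward_head(world_model(traj)) for traj in candidates]
    best = max(range(len(candidates)), key=scores.__getitem__)
    return candidates[best]
```

Because the world model is frozen and only its latent is consumed, such a rewarder can run alongside the planner without generating full video frames, which is what makes the real-time selection described above plausible.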