🤖 AI Summary
This work proposes a unified architecture for autonomous driving that integrates vision-language model (VLM)-driven scene understanding, trajectory planning, and conditional future image generation within a single end-to-end framework. By jointly training these components and enabling closed-loop iterative refinement, the system improves over conventional modular pipelines that treat perception, prediction, and planning as separate stages. A key innovation is trajectory-conditioned image generation, which explicitly couples planned actions with anticipated visual futures. The study also systematically compares discrete and continuous representations for future prediction and analyzes their impact on driving behavior. On the Bench2Drive benchmark, the method reduces L2 trajectory error by 5.9% and collision rate by 9.2% relative to the previous best method while generating high-fidelity future images.
📝 Abstract
World models have become central to autonomous driving, where accurate scene understanding and future prediction are crucial for safe control. Recent work has explored using vision-language models (VLMs) for planning, yet existing approaches typically treat perception, prediction, and planning as separate modules. We propose UniDrive-WM, a unified VLM-based world model that jointly performs driving-scene understanding, trajectory planning, and trajectory-conditioned future image generation within a single architecture. UniDrive-WM's trajectory planner predicts a future trajectory, which conditions a VLM-based image generator to produce plausible future frames. These predictions provide additional supervisory signals that enhance scene understanding and iteratively refine trajectory generation. We further compare discrete and continuous output representations for future image prediction, analyzing their influence on downstream driving performance. Experiments on the challenging Bench2Drive benchmark show that UniDrive-WM produces high-fidelity future images and improves planning performance by 5.9% in L2 trajectory error and 9.2% in collision rate over the previous best method. These results demonstrate the advantages of tightly integrating VLM-driven reasoning, planning, and generative world modeling for autonomous driving. The project page is available at https://unidrive-wm.github.io/UniDrive-WM.
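The closed loop the abstract describes — plan a trajectory, condition a generator on it to imagine a future frame, then re-plan from that imagined frame — can be sketched in miniature. The code below is a hypothetical toy illustration only: all function names, feature shapes, and the pixel-shift "generator" are assumptions for exposition, not the paper's actual VLM-based implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_scene(image: np.ndarray) -> np.ndarray:
    """Toy scene encoder: mean-pool pixels into one feature vector."""
    return image.reshape(-1, image.shape[-1]).mean(axis=0)

def plan_trajectory(scene_feat: np.ndarray, horizon: int = 4) -> np.ndarray:
    """Toy planner: emit (x, y) waypoints by integrating a step vector
    derived from the scene feature (stand-in for the VLM planner)."""
    step = scene_feat[:2]  # pretend the first two dims encode motion
    return np.cumsum(np.tile(step, (horizon, 1)), axis=0)

def generate_future(image: np.ndarray, trajectory: np.ndarray) -> np.ndarray:
    """Toy trajectory-conditioned generator: shift the current frame by
    the final waypoint (stand-in for conditional image generation)."""
    dx, dy = np.round(trajectory[-1]).astype(int)
    return np.roll(image, shift=(dy, dx), axis=(0, 1))

def refine(image: np.ndarray, trajectory: np.ndarray, steps: int = 2) -> np.ndarray:
    """Closed-loop refinement: imagine the future, then re-plan from it."""
    for _ in range(steps):
        future = generate_future(image, trajectory)
        trajectory = plan_trajectory(encode_scene(future), horizon=len(trajectory))
    return trajectory

image = rng.random((8, 8, 3))          # dummy current camera frame
traj = plan_trajectory(encode_scene(image))
refined = refine(image, traj)
print(refined.shape)                    # (4, 2): four (x, y) waypoints
```

The point of the sketch is the data flow, not the models: the generator takes the planned trajectory as an explicit input, and the refined plan depends on the imagined future frame, which is the coupling the paper attributes its gains to.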