AI Summary
This work addresses the lack of scalable and reproducible simulation-based evaluation for end-to-end autonomous driving, where evaluation currently relies heavily on costly, scenario-limited real-world road testing. To this end, the authors propose an action-conditioned, multi-camera generative world model that synthesizes temporally coherent and geometrically consistent future video sequences conditioned on historical multi-view observations and future action trajectories. The model explicitly enforces cross-view geometric consistency and temporal coherence, and it supports semantic editing of dynamic traffic participants, static road elements, and appearance attributes such as weather and time of day. It achieves precise action-conditioned video generation and style transfer, significantly outperforming existing methods in viewpoint consistency, dynamic stability, and control fidelity. This approach provides a robust, scalable foundation for simulating and evaluating end-to-end driving policies.
Abstract
Scalable and reliable evaluation is increasingly critical in the end-to-end era of autonomous driving, where vision-language-action (VLA) policies directly map raw sensor streams to driving actions. Yet, current evaluation pipelines still rely heavily on real-world road testing, which is costly, biased toward limited scenario coverage, and difficult to reproduce. These challenges motivate a real-world simulator that can generate realistic future observations under proposed actions, while remaining controllable and stable over long horizons. We present X-World, an action-conditioned multi-camera generative world model that simulates future observations directly in video space. Given synchronized multi-view camera history and a future action sequence, X-World generates future multi-camera video streams that follow the commanded actions. To ensure reproducible and editable scene rollouts, X-World further supports optional controls over dynamic traffic agents and static road elements, and retains a text-prompt interface for appearance-level control (e.g., weather and time of day). Beyond world simulation, X-World also enables video style transfer by conditioning on appearance prompts while preserving the underlying action and scene dynamics. At the core of X-World is a multi-view latent video generator designed to explicitly encourage cross-view geometric consistency and temporal coherence under diverse control signals. Experiments show that X-World achieves high-quality multi-view video generation with (i) strong view consistency across cameras, (ii) stable temporal dynamics over long rollouts, and (iii) high controllability with strict action following and faithful adherence to optional scene controls. These properties make X-World a practical foundation for scalable and reproducible evaluation.
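For concreteness, the sketch below illustrates the conditioning interface the abstract describes: multi-view camera history plus a future action trajectory, with optional scene controls and an appearance prompt, mapped to future multi-view frames. All names, tensor shapes, and the `rollout` function here are illustrative assumptions for exposition, not the released X-World API.

```python
# Hypothetical sketch of the conditioning interface described in the abstract.
# Names, shapes, and the rollout() stub are assumptions, not the X-World API.
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class WorldModelInputs:
    # Synchronized multi-view camera history: (views, history_frames, H, W, 3)
    camera_history: np.ndarray
    # Future action trajectory to follow: (future_steps, action_dim)
    future_actions: np.ndarray
    # Optional controls over dynamic agents / static road elements (scene edits)
    scene_controls: Optional[dict] = None
    # Appearance-level text prompt (weather, time of day, style transfer)
    appearance_prompt: Optional[str] = None

def rollout(inputs: WorldModelInputs, horizon: int) -> np.ndarray:
    """Placeholder rollout returning future multi-view frames of shape
    (views, horizon, H, W, 3). A real model would run the multi-view latent
    video generator conditioned on every field of `inputs`."""
    views, _, h, w, c = inputs.camera_history.shape
    return np.zeros((views, horizon, h, w, c), dtype=np.uint8)

# Example: 6 cameras, 4 history frames, 20 future action steps
history = np.zeros((6, 4, 256, 448, 3), dtype=np.uint8)
actions = np.zeros((20, 2))  # e.g. per-step (speed, yaw rate)
frames = rollout(
    WorldModelInputs(history, actions, appearance_prompt="rainy, night"),
    horizon=20,
)
print(frames.shape)  # (6, 20, 256, 448, 3)
```

The key design point this sketch highlights is that the commanded actions and the scene/appearance controls are separate conditioning channels, which is what allows style transfer (changing the appearance prompt) while keeping the action trajectory and scene dynamics fixed.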