X-World: Controllable Ego-Centric Multi-Camera World Models for Scalable End-to-End Driving

πŸ“… 2026-03-20
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses the lack of scalable and reproducible simulation-based evaluation for end-to-end autonomous driving, which currently relies heavily on costly and scenario-limited real-world road testing. To this end, the authors propose an action-conditioned, multi-camera generative world model that synthesizes temporally coherent and geometrically consistent future video sequences conditioned on historical multi-view observations and future action trajectories. The model explicitly enforces cross-view geometric consistency and temporal coherence, enabling semantic editing of dynamic traffic participants, static road elements, and appearance attributes such as weather and time of day. It achieves precise action-conditioned video generation and style transfer, significantly outperforming existing methods in viewpoint consistency, dynamic stability, and control fidelity. This approach provides a robust, scalable foundation for simulating and evaluating end-to-end driving policies.

πŸ“ Abstract
Scalable and reliable evaluation is increasingly critical in the end-to-end era of autonomous driving, where vision-language-action (VLA) policies directly map raw sensor streams to driving actions. Yet, current evaluation pipelines still rely heavily on real-world road testing, which is costly, biased toward limited scenario coverage, and difficult to reproduce. These challenges motivate a real-world simulator that can generate realistic future observations under proposed actions, while remaining controllable and stable over long horizons. We present X-World, an action-conditioned multi-camera generative world model that simulates future observations directly in video space. Given synchronized multi-view camera history and a future action sequence, X-World generates future multi-camera video streams that follow the commanded actions. To ensure reproducible and editable scene rollouts, X-World further supports optional controls over dynamic traffic agents and static road elements, and retains a text-prompt interface for appearance-level control (e.g., weather and time of day). Beyond world simulation, X-World also enables video style transfer by conditioning on appearance prompts while preserving the underlying action and scene dynamics. At the core of X-World is a multi-view latent video generator designed to explicitly encourage cross-view geometric consistency and temporal coherence under diverse control signals. Experiments show that X-World achieves high-quality multi-view video generation with (i) strong view consistency across cameras, (ii) stable temporal dynamics over long rollouts, and (iii) high controllability with strict action following and faithful adherence to optional scene controls. These properties make X-World a practical foundation for scalable and reproducible evaluation.
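To make the abstract's rollout setup concrete, the loop below is a minimal, purely illustrative sketch of an action-conditioned multi-camera world-model simulation: given a multi-view frame history and a sequence of planned ego actions, generate future frames one step at a time and feed them back in closed loop. All names, shapes, and the toy `world_model_step` are assumptions for illustration; the paper's actual model and interface are not described at this level of detail.

```python
import numpy as np

# Hypothetical sketch of an action-conditioned multi-camera rollout loop.
# None of these identifiers come from the paper. Toy shapes: 6 cameras,
# frames of H x W x 3, actions as (x, y, yaw) ego-motion commands.

N_CAMS, H, W = 6, 32, 64  # small resolution for illustration only

def world_model_step(history, action, rng):
    """Stand-in for one generation step: returns one future multi-view
    frame conditioned on the recent history and the commanded action.
    A real model would run a latent video generator here; we just
    perturb the last frame so the closed-loop structure is visible."""
    last = history[-1]
    return np.clip(last + 0.01 * rng.standard_normal(last.shape), 0.0, 1.0)

def rollout(history, actions, seed=0):
    """Autoregressive simulation: apply each planned action, then append
    the generated multi-view frame back into the history (closed loop)."""
    rng = np.random.default_rng(seed)
    frames = list(history)
    for action in actions:
        context = np.stack(frames[-4:])  # condition on the last 4 frames
        frames.append(world_model_step(context, action, rng))
    return np.stack(frames[len(history):])  # only the generated future

history = np.zeros((4, N_CAMS, H, W, 3))   # 4 past multi-view frames
actions = [(1.0, 0.0, 0.0)] * 8            # 8 future (x, y, yaw) commands
future = rollout(history, actions)
print(future.shape)  # (8, 6, 32, 64, 3)
```

The closed-loop structure is the point: each generated frame becomes part of the conditioning context for the next step, which is why the abstract emphasizes stability over long rollouts.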
Problem

Research questions and friction points this paper is trying to address.

autonomous driving
world model
simulation
evaluation
multi-camera
Innovation

Methods, ideas, or system contributions that make the work stand out.

world model
multi-camera video generation
action-conditioned simulation
controllable scene editing
cross-view consistency
Chaoda Zheng
Sean Li
Jinhao Deng
Zhennan Wang
Peng Cheng Lab
neural network design, deep learning, computer vision
Shijia Chen
Liqiang Xiao
Ziheng Chi
ETH ZΓΌrich
Hongbin Lin
Kangjie Chen
Nanyang Technological University
Trustworthy AI, Red-teaming, Backdoor Attacks, LLM-based Agents
Boyang Wang
Yu Zhang
Xianming Liu