Toward Physically Consistent Driving Video World Models under Challenging Trajectories

📅 2026-03-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limitations of existing driving video generation methods, which often produce physically inconsistent outputs and visual artifacts when handling challenging or counterfactual trajectories. To overcome these issues, we propose PhyGenesis, a world model featuring a physics-conditioned trajectory corrector that rectifies invalid trajectories and a physics-enhanced video generator that synthesizes high-fidelity, multi-view driving videos. We construct a physically rich, heterogeneous dataset by combining real-world data with CARLA simulations and introduce a challenging trajectory learning strategy to improve model generalization. Experimental results demonstrate that PhyGenesis significantly outperforms current approaches under complex trajectories, generating videos that exhibit both high visual fidelity and strong physical consistency.

📝 Abstract
Video generation models have shown strong potential as world models for autonomous driving simulation. However, existing approaches are primarily trained on real-world driving datasets, which mostly contain natural and safe driving scenarios. As a result, current models often fail when conditioned on challenging or counterfactual trajectories, such as imperfect trajectories generated by simulators or planning systems, producing videos with severe physical inconsistencies and artifacts. To address this limitation, we propose PhyGenesis, a world model designed to generate driving videos with high visual fidelity and strong physical consistency. Our framework consists of two key components: (1) a physical condition generator that transforms potentially invalid trajectory inputs into physically plausible conditions, and (2) a physics-enhanced video generator that produces high-fidelity multi-view driving videos under these conditions. To effectively train these components, we construct a large-scale, physics-rich heterogeneous dataset. Specifically, in addition to real-world driving videos, we generate diverse challenging driving scenarios using the CARLA simulator, from which we derive supervision signals that guide the model to learn physically grounded dynamics under extreme conditions. This challenging-trajectory learning strategy enables trajectory correction and promotes physically consistent video generation. Extensive experiments demonstrate that PhyGenesis consistently outperforms state-of-the-art methods, especially on challenging trajectories. Our project page is available at: https://wm-research.github.io/PhyGenesis/.
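To make the idea of "potentially invalid trajectory inputs" concrete: a trajectory can violate simple vehicle dynamics (implausible acceleration or turning radius) even if its waypoints look reasonable in isolation. The paper does not publish its corrector's internals, so the sketch below is only an illustrative kinematic-feasibility check, not PhyGenesis's method; the function name `check_trajectory_plausibility` and the thresholds `a_max` and `kappa_max` are hypothetical placeholders.

```python
import numpy as np

def check_trajectory_plausibility(xy, dt=0.1, a_max=8.0, kappa_max=0.2):
    """Flag interior waypoints whose implied dynamics exceed simple limits.

    xy: (N, 2) array of planar waypoints sampled every dt seconds.
    a_max: max acceleration magnitude in m/s^2 (illustrative value).
    kappa_max: max path curvature in 1/m (illustrative value).
    Returns a boolean mask over the N-2 interior waypoints (True = plausible).
    """
    v = np.diff(xy, axis=0) / dt          # finite-difference velocities, (N-1, 2)
    a = np.diff(v, axis=0) / dt           # finite-difference accelerations, (N-2, 2)
    acc = np.linalg.norm(a, axis=1)

    # Curvature from each triple of consecutive points via the circumscribed
    # circle: kappa = 2 * |cross| / (|p0p1| * |p1p2| * |p0p2|).
    p0, p1, p2 = xy[:-2], xy[1:-1], xy[2:]
    cross = ((p1[:, 0] - p0[:, 0]) * (p2[:, 1] - p0[:, 1])
             - (p1[:, 1] - p0[:, 1]) * (p2[:, 0] - p0[:, 0]))
    denom = (np.linalg.norm(p1 - p0, axis=1)
             * np.linalg.norm(p2 - p1, axis=1)
             * np.linalg.norm(p2 - p0, axis=1))
    kappa = np.where(denom > 1e-9, 2.0 * np.abs(cross) / np.maximum(denom, 1e-9), 0.0)

    return (acc <= a_max) & (kappa <= kappa_max)
```

A corrector in this spirit would project the offending waypoints back inside such limits before conditioning the video generator; the actual PhyGenesis component is learned from the CARLA-derived supervision described above.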
Problem

Research questions and friction points this paper is trying to address.

physically consistent
driving video generation
challenging trajectories
world models
autonomous driving simulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

world model
physical consistency
driving video generation
trajectory correction
physics-rich dataset
👥 Authors
Jiawei Zhou (Zhejiang University)
Zhenxin Zhu (Xiaomi AD; AIGCNeRF)
Lingyi Du (Zhejiang University)
Linye Lyu (The Hong Kong Polytechnic University)
Lijun Zhou (Xiaomi Corporation)
Zhanqian Wu (Xiaomi EV)
Hongcheng Luo (Xiaomi EV)
Zhuotao Tian (Professor, Harbin Institute of Technology (Shenzhen); Vision-language Model, Multi-modal Perception, Computer Vision)
Bing Wang (Xiaomi EV; Computer Vision, Pattern Recognition, Machine Learning)
Guang Chen (Xiaomi EV)
Hangjun Ye (Xiaomi EV)
Haiyang Sun (Xiaomi EV; World Model, Autonomous Driving, 3D Vision)
Yu Li (Zhejiang University)