🤖 AI Summary
This work addresses the limitations of existing driving video generation methods, which often produce physically inconsistent outputs and visual artifacts when handling challenging or counterfactual trajectories. To overcome these issues, we propose PhyGenesis, a world model featuring a physical condition generator that rectifies invalid trajectories and a physics-enhanced video generator that synthesizes high-fidelity, multi-view driving videos. We construct a physics-rich, heterogeneous dataset by combining real-world data with CARLA simulations and introduce a challenging-trajectory learning strategy to improve model generalization. Experimental results demonstrate that PhyGenesis significantly outperforms current approaches under complex trajectories, generating videos that exhibit both high visual fidelity and strong physical consistency.
📝 Abstract
Video generation models have shown strong potential as world models for autonomous driving simulation. However, existing approaches are primarily trained on real-world driving datasets, which mostly contain natural and safe driving scenarios. As a result, current models often fail when conditioned on challenging or counterfactual trajectories, such as imperfect trajectories generated by simulators or planning systems, producing videos with severe physical inconsistencies and artifacts. To address this limitation, we propose PhyGenesis, a world model designed to generate driving videos with high visual fidelity and strong physical consistency. Our framework consists of two key components: (1) a physical condition generator that transforms potentially invalid trajectory inputs into physically plausible conditions, and (2) a physics-enhanced video generator that produces high-fidelity multi-view driving videos under these conditions. To effectively train these components, we construct a large-scale, physics-rich heterogeneous dataset. Specifically, in addition to real-world driving videos, we generate diverse challenging driving scenarios using the CARLA simulator, from which we derive supervision signals that guide the model to learn physically grounded dynamics under extreme conditions. This challenging-trajectory learning strategy enables trajectory correction and promotes physically consistent video generation. Extensive experiments demonstrate that PhyGenesis consistently outperforms state-of-the-art methods, especially on challenging trajectories. Our project page is available at: https://wm-research.github.io/PhyGenesis/.
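To make the idea of rectifying physically invalid trajectories concrete, here is a minimal, illustrative sketch (not the paper's actual physical condition generator, whose design is learned) of the kind of kinematic constraint such a corrector must enforce: each waypoint step is re-integrated so that implied speeds and accelerations stay within plausible vehicle limits. The function name, parameters, and limit values are all assumptions chosen for illustration.

```python
import math

def correct_trajectory(points, dt=0.1, a_max=4.0, v_max=20.0):
    """Illustrative trajectory rectifier (NOT the PhyGenesis model):
    clamp per-step speed changes so the path respects simple
    acceleration and top-speed limits, re-integrating positions
    along the original step directions."""
    if len(points) < 2:
        return list(points)
    out = [points[0]]
    v_prev = 0.0  # assume the vehicle starts at rest
    for i in range(1, len(points)):
        px, py = out[-1]
        tx, ty = points[i]
        dx, dy = tx - px, ty - py
        dist = math.hypot(dx, dy)
        v_req = dist / dt  # speed the raw waypoint would demand
        # clamp the speed change to the acceleration limit and cap top speed
        v = max(0.0, min(v_req, v_prev + a_max * dt, v_max))
        if dist > 1e-9:
            scale = (v * dt) / dist
            out.append((px + dx * scale, py + dy * scale))
        else:
            out.append((px, py))
        v_prev = v
    return out
```

A physically feasible input passes through unchanged, while a waypoint implying an impossible jump (e.g. 100 m in 0.1 s) is pulled back to the reachable envelope. A learned corrector like the one described above can additionally account for scene context, which a fixed kinematic clamp cannot.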