UniDWM: Towards a Unified Driving World Model via Multifaceted Representation Learning

📅 2026-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of achieving reliable and efficient planning in complex driving scenarios by proposing a unified framework that jointly models geometric, appearance, and dynamic information. The authors introduce a multifaceted representation learning approach that constructs an implicit world representation integrating both structural and dynamic awareness, formalized as a variant of the variational autoencoder to provide a theoretical foundation for a unified driving world model. The method recovers scene geometry and texture through joint trajectory reconstruction and leverages a conditional diffusion Transformer to collaboratively generate future scene evolution in latent space. Experiments demonstrate that the proposed representation achieves strong performance across trajectory planning, 4D reconstruction, and generation tasks, validating its effectiveness as a foundational component for unified driving intelligence.
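The summary above describes a pipeline where scenes are encoded into a latent state and a conditional diffusion transformer forecasts future latents. As a minimal illustration of that latent-space conditional diffusion idea — not the authors' implementation; the noise schedule, dimensions, and the linear stand-in for the transformer are all hypothetical — a DDPM-style forward noising step and an epsilon-prediction objective can be sketched as:

```python
import numpy as np

# Hypothetical sketch of latent-space conditional diffusion (NOT the UniDWM code).
# The idea: (1) a "future" scene latent z0 is noised by a standard DDPM forward
# process, and (2) a conditional denoiser predicts the injected noise given the
# noisy latent plus a "past context" latent. In the paper the denoiser is a
# diffusion transformer; a fixed linear map stands in here to show the interface.

rng = np.random.default_rng(0)

T = 1000                                  # number of diffusion steps
betas = np.linspace(1e-4, 0.02, T)        # linear noise schedule (illustrative)
alphas_bar = np.cumprod(1.0 - betas)      # cumulative product \bar{alpha}_t

def q_sample(z0, t, eps):
    """Forward process: z_t = sqrt(abar_t) * z0 + sqrt(1 - abar_t) * eps."""
    return np.sqrt(alphas_bar[t]) * z0 + np.sqrt(1.0 - alphas_bar[t]) * eps

d = 8                                     # toy latent dimension
z0 = rng.normal(size=d)                   # "future" latent to be predicted
cond = rng.normal(size=d)                 # "past context" conditioning latent

t = 500
eps = rng.normal(size=d)                  # Gaussian noise injected at step t
zt = q_sample(z0, t, eps)

# Toy conditional denoiser over the concatenated [z_t, cond] input.
W = rng.normal(size=(d, 2 * d)) / np.sqrt(2 * d)
eps_hat = W @ np.concatenate([zt, cond])

# Epsilon-prediction training loss, as in standard diffusion training.
loss = float(np.mean((eps_hat - eps) ** 2))
print(round(loss, 4))
```

The key design point this mirrors is that generation happens entirely in latent space: the denoiser never sees pixels, only the compact world representation and its conditioning context.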

📝 Abstract
Achieving reliable and efficient planning in complex driving environments requires a model that can reason over the scene's geometry, appearance, and dynamics. We present UniDWM, a unified driving world model that advances autonomous driving through multifaceted representation learning. UniDWM constructs a structure- and dynamic-aware latent world representation that serves as a physically grounded state space, enabling consistent reasoning across perception, prediction, and planning. Specifically, a joint reconstruction pathway learns to recover the scene's structure, including geometry and visual texture, while a collaborative generation framework leverages a conditional diffusion transformer to forecast future world evolution within the latent space. Furthermore, we show that UniDWM can be viewed as a variant of the VAE, which provides theoretical guidance for multifaceted representation learning. Extensive experiments demonstrate the effectiveness of UniDWM in trajectory planning, 4D reconstruction, and generation, highlighting the potential of multifaceted world representations as a foundation for unified driving intelligence. The code will be publicly available at https://github.com/Say2L/UniDWM.
Problem

Research questions and friction points this paper is trying to address.

autonomous driving
world model
representation learning
scene understanding
trajectory planning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified Driving World Model
Multifaceted Representation Learning
Conditional Diffusion Transformer
Latent World Representation
Structure- and Dynamic-aware Modeling
Shuai Liu
Sun Yat-sen University
Computer Vision · Autonomous Driving · XAI
Siheng Ren
School of Computer Science and Engineering, Sun Yat-sen University
Xiaoyao Zhu
School of Computer Science and Engineering, Sun Yat-sen University
Quanmin Liang
Sun Yat-sen University
Multimodal · Embodied AI
Zefeng Li
School of Computer Science and Engineering, Sun Yat-sen University
Qiang Li
XPeng Motors Technology Co Ltd.
Xin Hu
XPeng Motors Technology Co Ltd.
Kai Huang
Sun Yat-sen University
Embedded Systems