🤖 AI Summary
This work addresses the challenge of achieving reliable and efficient planning in complex driving scenarios by proposing a unified framework that jointly models geometric, appearance, and dynamic information. The authors introduce a multifaceted representation learning approach that constructs an implicit world representation integrating both structural and dynamic awareness, and formalize it as a variant of the variational autoencoder to provide a theoretical foundation for a unified driving world model. The method recovers scene geometry and texture through a joint reconstruction pathway and leverages a conditional diffusion transformer to generate future scene evolution in latent space. Experiments demonstrate that the proposed representation achieves strong performance across trajectory planning, 4D reconstruction, and generation tasks, validating its effectiveness as a foundational component for unified driving intelligence.
📝 Abstract
Achieving reliable and efficient planning in complex driving environments requires a model that can reason over the scene's geometry, appearance, and dynamics. We present UniDWM, a unified driving world model that advances autonomous driving through multifaceted representation learning. UniDWM constructs a structure- and dynamics-aware latent world representation that serves as a physically grounded state space, enabling consistent reasoning across perception, prediction, and planning. Specifically, a joint reconstruction pathway learns to recover the scene's structure, including geometry and visual texture, while a collaborative generation framework leverages a conditional diffusion transformer to forecast future world evolution within the latent space. Furthermore, we show that UniDWM can be viewed as a variant of the variational autoencoder (VAE), which provides theoretical guidance for the multifaceted representation learning. Extensive experiments demonstrate the effectiveness of UniDWM in trajectory planning, 4D reconstruction, and generation, highlighting the potential of multifaceted world representations as a foundation for unified driving intelligence. The code will be publicly available at https://github.com/Say2L/UniDWM.
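To make the latent-space forecasting pipeline concrete, the sketch below shows the general pattern the abstract describes: encode an observation into a latent world state, then iteratively denoise a noised future latent conditioned on a planning signal. This is a minimal toy illustration, not the authors' implementation — the linear "encoder" and linear noise predictor are placeholders for UniDWM's learned networks, and all dimensions, weights, and the simplified DDPM-style update are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper).
OBS_DIM = 16     # raw observation size
LATENT_DIM = 8   # latent world-state size
COND_DIM = 4     # conditioning size (e.g. planned trajectory / ego action)

def encode(obs: np.ndarray, w_enc: np.ndarray) -> np.ndarray:
    """Toy stand-in for the structure- and dynamics-aware encoder:
    a linear map from observation to latent world state."""
    return w_enc @ obs

def denoise_step(z_noisy, cond, t, w_z, w_c, beta=0.02):
    """One reverse-diffusion update in latent space, conditioned on `cond`.
    The linear noise predictor is a placeholder for the conditional
    diffusion transformer described in the abstract."""
    eps_hat = w_z @ z_noisy + w_c @ cond          # predicted noise
    alpha = 1.0 - beta
    # Simplified DDPM mean update (stochastic term omitted for determinism).
    return (z_noisy - beta / np.sqrt(1.0 - alpha**t) * eps_hat) / np.sqrt(alpha)

# Random toy weights standing in for trained parameters.
w_enc = rng.normal(size=(LATENT_DIM, OBS_DIM))
w_z = rng.normal(size=(LATENT_DIM, LATENT_DIM)) * 0.1
w_c = rng.normal(size=(LATENT_DIM, COND_DIM)) * 0.1

obs = rng.normal(size=OBS_DIM)    # current observation
cond = rng.normal(size=COND_DIM)  # conditioning signal

z = encode(obs, w_enc)
z_future = z + rng.normal(size=LATENT_DIM)  # start from a noised future latent
for t in range(10, 0, -1):                  # iterative denoising
    z_future = denoise_step(z_future, cond, t, w_z, w_c)

print(z_future.shape)  # (8,)
```

The point of the sketch is the data flow — observation → latent state → conditioned iterative refinement of a future latent — rather than any particular network choice; in the paper this latent doubles as the state space for planning, reconstruction, and generation.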