Seeing the Future, Perceiving the Future: A Unified Driving World Model for Future Generation and Perception

📅 2025-03-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the disconnect between future scene generation and geometric perception in autonomous driving, this paper proposes UniFuture, a unified world model that, for the first time, jointly generates and perceives RGB images and depth maps. Methodologically, UniFuture introduces a dual latent space sharing mechanism and a multi-scale latent feature interaction module: it employs a variational autoencoder to establish a bidirectional mapping between the RGB and depth modalities, integrates cross-attention to enable mutual refinement of appearance and geometry, and adopts a joint reconstruction loss for end-to-end training. Evaluated on nuScenes, UniFuture significantly outperforms task-specific models, reducing Fréchet Video Distance (FVD) by 18.3% and depth estimation error by 12.7%. Crucially, it synthesizes spatiotemporally consistent future RGB-depth pairs from a single input frame, establishing a new paradigm for holistic driving scene modeling.

📝 Abstract
We present UniFuture, a simple yet effective driving world model that seamlessly integrates future scene generation and perception within a single framework. Unlike existing models that focus solely on pixel-level future prediction or geometric reasoning, our approach jointly models future appearance (i.e., RGB image) and geometry (i.e., depth), ensuring coherent predictions. Specifically, during training, we first introduce a Dual-Latent Sharing scheme, which transfers image and depth sequences into a shared latent space, allowing both modalities to benefit from shared feature learning. Additionally, we propose a Multi-scale Latent Interaction mechanism, which facilitates bidirectional refinement between image and depth features at multiple spatial scales, effectively enhancing geometry consistency and perceptual alignment. During testing, UniFuture can easily predict high-consistency future image-depth pairs using only the current image as input. Extensive experiments on the nuScenes dataset demonstrate that UniFuture outperforms specialized models on future generation and perception tasks, highlighting the advantages of a unified, structurally-aware world model. The project page is at https://github.com/dk-liang/UniFuture.
Problem

Research questions and friction points this paper is trying to address.

Existing driving world models handle future scene generation and perception in separate, task-specific frameworks.
Future appearance (RGB) and geometry (depth) are not modeled jointly, so predictions can be mutually incoherent.
Geometry consistency and perceptual alignment between modalities are hard to maintain without multi-scale interaction.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified model for future scene generation and perception
Dual-Latent Sharing scheme for shared feature learning
Multi-scale Latent Interaction for geometry consistency
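The Multi-scale Latent Interaction mechanism can be pictured as bidirectional cross-attention between image and depth features, with each modality refining the other. The page does not include pseudocode, so the following is a minimal pure-Python sketch under assumed details (a single scale, single attention head, residual updates; `bidirectional_refine` and its signature are illustrative, not the authors' API):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def cross_attention(queries, keys, values):
    # Scaled dot-product cross-attention: each query token attends
    # over all key/value tokens of the other modality.
    d = len(queries[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out

def bidirectional_refine(img_tokens, depth_tokens):
    # One interaction step: image features attend to depth features and
    # vice versa, then each update is added residually (toy stand-in for
    # the paper's multi-scale interaction module).
    img_upd = cross_attention(img_tokens, depth_tokens, depth_tokens)
    dep_upd = cross_attention(depth_tokens, img_tokens, img_tokens)
    img_ref = [[a + b for a, b in zip(t, u)] for t, u in zip(img_tokens, img_upd)]
    dep_ref = [[a + b for a, b in zip(t, u)] for t, u in zip(depth_tokens, dep_upd)]
    return img_ref, dep_ref
```

In the full model this exchange would run at several spatial scales of the shared latent space; the sketch shows only the core appearance-geometry refinement step.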
Dingkang Liang — Huazhong University of Science and Technology (Embodied AI, World Model, Autonomous Driving, Crowd Counting)
Dingyuan Zhang — Huazhong University of Science and Technology, China
Xin Zhou — Huazhong University of Science and Technology, China
Sifan Tu — HUST (3D Vision, Autonomous Driving)
Tianrui Feng — Huazhong University of Science and Technology, China
Xiaofan Li — East China Normal University (Computer Vision)
Yumeng Zhang — Baidu Inc., China
Mingyang Du — Huazhong University of Science and Technology, China
Xiao Tan — Baidu Inc., China
Xiang Bai — Huazhong University of Science and Technology (HUST) (Computer Vision, OCR)