DriveTok: 3D Driving Scene Tokenization for Unified Multi-View Reconstruction and Understanding

📅 2026-03-19
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the limitations of existing image tokenizers, which are primarily designed for monocular 2D scenes and suffer from inefficiency and view inconsistency in high-resolution, multi-view driving environments. To overcome these challenges, we propose DriveTok, an efficient tokenization method tailored for 3D driving scenes. DriveTok leverages 3D deformable cross-attention to fuse semantic features extracted by vision foundation models into unified 3D scene tokens, and integrates a multi-view transformer to jointly reconstruct and understand the scene across multiple tasks. Our approach achieves a unified representation of semantic, geometric, and textural information, significantly enhancing spatial perception consistency and computational efficiency. Evaluated on the nuScenes dataset, DriveTok demonstrates strong performance across diverse tasks, including image reconstruction, semantic segmentation, depth estimation, and 3D semantic occupancy prediction.

πŸ“ Abstract
With the growing adoption of vision-language-action models and world models in autonomous driving systems, scalable image tokenization becomes crucial as the interface for the visual modality. However, most existing tokenizers are designed for monocular 2D scenes, leading to inefficiency and inter-view inconsistency when applied to high-resolution multi-view driving scenes. To address this, we propose DriveTok, an efficient 3D driving scene tokenizer for unified multi-view reconstruction and understanding. DriveTok first obtains semantically rich visual features from vision foundation models and then transforms them into scene tokens with 3D deformable cross-attention. For decoding, we employ a multi-view transformer to reconstruct multi-view features from the scene tokens and use multiple heads to obtain RGB, depth, and semantic reconstructions. We also add a 3D head directly on the scene tokens for 3D semantic occupancy prediction for better spatial awareness. With these multiple training objectives, DriveTok learns unified scene tokens that integrate semantic, geometric, and textural information for efficient multi-view tokenization. Extensive experiments on the widely used nuScenes dataset demonstrate that the scene tokens from DriveTok perform well on image reconstruction, semantic segmentation, depth prediction, and 3D occupancy prediction tasks.
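The pipeline the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: plain scaled dot-product cross-attention stands in for 3D deformable cross-attention, random arrays stand in for vision-foundation-model features, the multi-view transformer decoder is stubbed as linear task heads, and all dimensions and class counts are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: queries attend over keys/values.

    Stand-in for DriveTok's 3D deformable cross-attention, which would
    instead sample a sparse set of reference points per scene token.
    """
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)      # (n_tokens, n_feats)
    return softmax(scores, axis=-1) @ values    # (n_tokens, d)

rng = np.random.default_rng(0)
n_views, h, w, d = 6, 8, 16, 64   # illustrative sizes (hypothetical)
n_tokens = 128                    # number of learnable scene tokens

# Encoder side: per-view patch features from a frozen vision foundation
# model (stubbed here with random values), flattened across all views.
view_feats = rng.standard_normal((n_views * h * w, d))

# Learnable scene tokens aggregate multi-view features via cross-attention,
# yielding a unified 3D scene representation.
scene_tokens = rng.standard_normal((n_tokens, d))
scene_tokens = cross_attention(scene_tokens, view_feats, view_feats)

# Decoder side: task heads, stubbed as linear projections. In DriveTok the
# RGB/depth/semantic heads sit on features reconstructed by a multi-view
# transformer, while the occupancy head acts directly on the scene tokens.
W_rgb = rng.standard_normal((d, 3))     # RGB reconstruction head
W_dep = rng.standard_normal((d, 1))     # depth head
W_occ = rng.standard_normal((d, 17))    # 3D semantic occupancy head
                                        # (17 classes, hypothetical)
rgb   = scene_tokens @ W_rgb            # (128, 3)
depth = scene_tokens @ W_dep            # (128, 1)
occ   = scene_tokens @ W_occ            # (128, 17)
```

The key efficiency point the abstract makes is visible even in this toy version: every downstream head reads from the same fixed-size set of scene tokens rather than from `n_views * h * w` per-view features.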
Problem

Research questions and friction points this paper is trying to address.

multi-view
3D driving scene
tokenization
inter-view inconsistency
scalable visual interface
Innovation

Methods, ideas, or system contributions that make the work stand out.

3D scene tokenization
multi-view reconstruction
vision foundation models
3D semantic occupancy
deformable cross-attention
Dong Zhuo
Tsinghua University
Wenzhao Zheng
EECS, University of California, Berkeley
Large Models, Embodied Agents, Autonomous Driving
Sicheng Zuo
Tsinghua University
Siming Yan
Yinwang Intelligent Technology Co. Ltd.
Lu Hou
Yinwang Intelligent Technology Co. Ltd.
Jie Zhou
Tsinghua University
Graph Neural Networks, Natural Language Processing
Jiwen Lu
Tsinghua University