DVGT-2: Vision-Geometry-Action Model for Autonomous Driving at Scale

📅 2026-04-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing end-to-end autonomous driving approaches, which either rely on sparse perception or language priors and thus fail to provide the complete 3D environmental understanding required for robust decision-making, or employ computationally expensive dense geometric reconstruction that impedes real-time planning. To overcome this trade-off, the paper introduces the Vision-Geometry-Action (VGA) paradigm, which centers on dense 3D geometry as the core representation. It presents the Driving Visual Geometry Transformer (DVGT-2), a streaming architecture that leverages temporal causal attention, historical feature caching, and sliding-window inference to simultaneously achieve efficient dense reconstruction and trajectory planning from single-frame inputs. The method demonstrates strong performance on benchmarks such as nuScenes and NAVSIM and generalizes across varying camera configurations without fine-tuning.
📝 Abstract
End-to-end autonomous driving has evolved from the conventional paradigm based on sparse perception into vision-language-action (VLA) models, which learn language descriptions as an auxiliary task to facilitate planning. In this paper, we propose an alternative Vision-Geometry-Action (VGA) paradigm that advocates dense 3D geometry as the critical cue for autonomous driving. Since vehicles operate in a 3D world, we argue that dense 3D geometry provides the most comprehensive information for decision-making. However, most existing geometry reconstruction methods (e.g., DVGT) rely on computationally expensive batch processing of multi-frame inputs and cannot be applied to online planning. To address this, we introduce a streaming Driving Visual Geometry Transformer (DVGT-2), which processes inputs in an online manner and jointly outputs dense geometry and a planned trajectory for the current frame. We employ temporal causal attention and cache historical features to support on-the-fly inference. To further enhance efficiency, we propose a sliding-window streaming strategy that reuses historical caches within a fixed interval to avoid repeated computation. Despite its faster speed, DVGT-2 achieves superior geometry reconstruction performance on various datasets. The same trained DVGT-2 can be directly applied to planning across diverse camera configurations without fine-tuning, including the closed-loop NAVSIM and open-loop nuScenes benchmarks.
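The streaming mechanism the abstract describes (each new frame attends causally to a bounded cache of historical features, with old entries evicted under a sliding window so per-frame cost stays constant) can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the class name, single-vector frame features, window size, and feature dimension are all invented for illustration.

```python
import numpy as np

class StreamingFrameCache:
    """Illustrative sketch of sliding-window streaming inference:
    a bounded cache of historical frame features, queried by the
    current frame via causal (past-only) attention.
    Hypothetical toy model, not the DVGT-2 implementation."""

    def __init__(self, window: int = 4, dim: int = 8):
        self.window = window          # max number of cached historical frames
        self.dim = dim
        self.cache: list[np.ndarray] = []

    def step(self, frame_feat: np.ndarray) -> np.ndarray:
        # Cache the current frame's feature; evict the oldest frame
        # once the sliding window is full, keeping cost bounded.
        self.cache.append(frame_feat)
        if len(self.cache) > self.window:
            self.cache.pop(0)

        # Causal attention: the current frame is the query; only the
        # cached (current + past) frames serve as keys/values, so no
        # future frames are ever visited and no past work is redone.
        q = frame_feat                       # (dim,)
        K = np.stack(self.cache)             # (t, dim), t <= window
        scores = K @ q / np.sqrt(self.dim)   # (t,)
        w = np.exp(scores - scores.max())
        w /= w.sum()
        return w @ K                         # attended feature, (dim,)
```

Streaming ten frames through this toy cache keeps the cache at four entries regardless of sequence length, which is the property that distinguishes online inference from batch processing of all frames at once.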
Problem

Research questions and friction points this paper is trying to address.

autonomous driving
dense 3D geometry
online planning
geometry reconstruction
real-time inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-Geometry-Action
dense 3D geometry
streaming inference
temporal causal attention
online planning