VGD: Visual Geometry Gaussian Splatting for Feed-Forward Surround-view Driving Reconstruction

📅 2025-10-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Surround-view autonomous driving scene reconstruction faces the core challenge of balancing geometric consistency against novel-view rendering quality, particularly in low-overlap regions. To address this, we propose VGD, an end-to-end feed-forward framework that integrates geometric prior distillation with a multi-scale Gaussian rendering head, explicitly modeling geometric constraints to enhance semantic fidelity. VGD employs a lightweight VGGT variant to extract geometric priors, introduces a differentiable Gaussian head that predicts rendering parameters, and jointly optimizes geometry and semantics via multi-scale feature-consistency supervision. Evaluated on nuScenes, VGD significantly outperforms state-of-the-art methods in PSNR, LPIPS, and visual fidelity, demonstrating high-fidelity, scalable reconstruction.

📝 Abstract
Feed-forward surround-view autonomous driving scene reconstruction offers fast, generalizable inference, but faces the core challenge of ensuring generalization while improving novel-view quality. Because surround-view cameras share minimal overlap regions, existing methods typically fail to ensure geometric consistency and reconstruction quality for novel views. To resolve this tension, we argue that geometric information must be learned explicitly, and that the resulting features should be leveraged to guide the improvement of semantic quality in novel views. In this paper, we introduce Visual Gaussian Driving (VGD), a novel feed-forward end-to-end learning framework designed to address this challenge. To achieve generalizable geometric estimation, we design a lightweight variant of the VGGT architecture that efficiently distills geometric priors from the pre-trained VGGT into the geometry branch. Furthermore, we design a Gaussian Head that fuses multi-scale geometry tokens to predict Gaussian parameters for novel-view rendering, sharing the same patch backbone as the geometry branch. Finally, we integrate multi-scale features from both the geometry and Gaussian Head branches to jointly supervise a semantic refinement model, optimizing rendering quality through feature-consistent learning. Experiments on nuScenes demonstrate that our approach significantly outperforms state-of-the-art methods in both objective metrics and subjective quality under various settings, validating VGD's scalability and high-fidelity surround-view reconstruction.
Problem

Research questions and friction points this paper is trying to address.

Ensuring geometric consistency in surround-view autonomous driving reconstruction
Improving novel view rendering quality with minimal overlap regions
Developing feed-forward framework for generalizable scene reconstruction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight VGGT variant distills geometric priors
Gaussian Head fuses geometry tokens for rendering
Multi-scale features supervise semantic refinement model
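The multi-scale feature-consistency supervision listed above can be sketched as a simple distillation-style objective. The function names, the plain-list feature representation, and the per-scale `weights` parameter below are illustrative assumptions, not the paper's implementation:

```python
# Illustrative sketch (not the paper's code): a multi-scale
# feature-consistency loss that averages mean-squared error between
# student features (e.g. the geometry/Gaussian branches) and teacher
# features (e.g. a frozen pre-trained VGGT) at each scale.

def mse(a, b):
    """Mean squared error between two equal-length feature vectors."""
    assert len(a) == len(b)
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def feature_consistency_loss(student_feats, teacher_feats, weights=None):
    """Weighted average of per-scale MSE; `weights` is a hypothetical
    knob for balancing scales (defaults to uniform)."""
    if weights is None:
        weights = [1.0] * len(student_feats)
    total = sum(w * mse(s, t)
                for w, s, t in zip(weights, student_feats, teacher_feats))
    return total / sum(weights)

# Toy example with two "scales" of 1-D feature vectors.
student = [[0.0, 1.0], [2.0, 2.0]]
teacher = [[0.0, 1.0], [0.0, 0.0]]
loss = feature_consistency_loss(student, teacher)  # (0 + 4) / 2 = 2.0
```

In practice such a loss would operate on feature tensors from each decoder scale; the plain lists here only make the averaging structure explicit.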
Junhong Lin
Guangdong Provincial Key Laboratory of Ultra High Definition Immersive Media Technology, School of Electronic and Computer Engineering, Peking University, Shenzhen 518055, China
Kangli Wang
Guangdong Provincial Key Laboratory of Ultra High Definition Immersive Media Technology, School of Electronic and Computer Engineering, Peking University, Shenzhen 518055, China
Shunzhou Wang
HENU | PKUSZ | BIT
Image Super-Resolution · Depth Estimation · 3DGS
Songlin Fan
Guangdong Provincial Key Laboratory of Ultra High Definition Immersive Media Technology, School of Electronic and Computer Engineering, Peking University, Shenzhen 518055, China
Ge Li
Full Professor of Computer Science, Peking University
Program Analysis · Program Generation · Deep Learning
Wei Gao
Guangdong Provincial Key Laboratory of Ultra High Definition Immersive Media Technology, School of Electronic and Computer Engineering, Peking University, Shenzhen 518055, China, and Peng Cheng Laboratory, Shenzhen 518066, China