VGGT: Visual Geometry Grounded Transformer

📅 2025-03-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the challenge of jointly estimating a scene's key 3D geometric attributes—camera parameters, depth maps, point maps, and 3D point tracks—in both single- and multi-view settings. We propose VGGT, an end-to-end feed-forward Transformer for unified multi-task 3D geometric perception. VGGT removes the need for stage-wise optimization and task-specific architectures by building multi-view feature interaction into a single network and regressing 3D coordinates directly, enabling fully differentiable training. It achieves state-of-the-art performance on four core tasks: camera pose estimation, multi-view depth prediction, dense reconstruction, and 3D point tracking. With inference time under one second, VGGT outperforms methods that depend on post-processing with visual geometry optimization. Moreover, its pretrained backbone improves downstream performance on non-rigid point tracking and feed-forward novel view synthesis.

📝 Abstract
We present VGGT, a feed-forward neural network that directly infers all key 3D attributes of a scene, including camera parameters, point maps, depth maps, and 3D point tracks, from one, a few, or hundreds of its views. This approach is a step forward in 3D computer vision, where models have typically been constrained to and specialized for single tasks. It is also simple and efficient, reconstructing images in under one second, and still outperforming alternatives that require post-processing with visual geometry optimization techniques. The network achieves state-of-the-art results in multiple 3D tasks, including camera parameter estimation, multi-view depth estimation, dense point cloud reconstruction, and 3D point tracking. We also show that using pretrained VGGT as a feature backbone significantly enhances downstream tasks, such as non-rigid point tracking and feed-forward novel view synthesis. Code and models are publicly available at https://github.com/facebookresearch/vggt.
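The core design—a single shared trunk whose per-view features feed several task-specific regression heads in one forward pass—can be sketched with a toy example. Everything below (feature widths, the tiny image resolution, and the random linear layers standing in for VGGT's transformer trunk and prediction heads) is illustrative only, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

D_IN, D = 32, 64   # illustrative feature widths (VGGT's are far larger)
H, W = 4, 4        # tiny "image" resolution for the dense heads

# One random linear layer per module; the real model uses a large
# transformer trunk plus dedicated camera and dense-prediction heads.
W_trunk  = rng.standard_normal((D_IN, D)) / np.sqrt(D_IN)
W_cam    = rng.standard_normal((D, 9)) / np.sqrt(D)          # per-view camera parameters
W_depth  = rng.standard_normal((D, H * W)) / np.sqrt(D)      # per-pixel depth
W_points = rng.standard_normal((D, H * W * 3)) / np.sqrt(D)  # per-pixel 3D point map

def vggt_like_forward(views: np.ndarray) -> dict:
    """One feed-forward pass: N view features in, all 3D attributes out."""
    feats = np.tanh(views @ W_trunk)                   # shared trunk, (N, D)
    return {
        "camera": feats @ W_cam,                       # (N, 9)
        "depth":  (feats @ W_depth).reshape(-1, H, W),
        "points": (feats @ W_points).reshape(-1, H, W, 3),
    }

views = rng.standard_normal((3, D_IN))                 # three input views
out = vggt_like_forward(views)
print(out["camera"].shape, out["depth"].shape, out["points"].shape)
# (3, 9) (3, 4, 4) (3, 4, 4, 3)
```

The point of the sketch is the shape of the computation, not the math inside it: all tasks share one set of features, so nothing in the pipeline requires per-scene optimization or per-task architectures.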
Problem

Research questions and friction points this paper is trying to address.

Jointly infer all key 3D scene attributes (cameras, depth, point maps, tracks) from one or many views.
Reach state-of-the-art accuracy on core 3D tasks such as camera pose and depth estimation without per-scene optimization.
Provide a pretrained backbone that transfers to downstream tasks.
Innovation

Methods, ideas, or system contributions that make the work stand out.

A single feed-forward pass directly predicts all 3D scene attributes from the input views
Reconstructs a scene from its images in under one second, outperforming optimization-based post-processing
Pretrained VGGT features enhance downstream tasks such as non-rigid tracking and novel view synthesis