FastVGGT: Training-Free Acceleration of Visual Geometry Transformer

📅 2025-09-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Visual Geometry Transformers (VGGTs) suffer from low inference efficiency and token collapse in attention maps when processing long image sequences for 3D vision tasks. Method: We propose a training-free token merging method featuring a dynamic, task-aware chunking strategy tailored for 3D reconstruction. By integrating structure-aware local–global token aggregation, it substantially reduces redundant computation and error accumulation. Contribution/Results: This work pioneers the application of token merging to 3D visual geometry modeling, balancing computational scalability with geometric fidelity. Evaluated on multiple 3D reconstruction benchmarks, our method achieves a 4× speedup over baseline VGGT on sequences of 1,000 images while preserving high-fidelity reconstruction quality, surpassing existing acceleration techniques in generalizability and stability under long-sequence regimes.

📝 Abstract
Foundation models for 3D vision have recently demonstrated remarkable capabilities in 3D perception. However, scaling these models to long-sequence image inputs remains a significant challenge due to inference-time inefficiency. In this work, we present a detailed analysis of VGGT, a state-of-the-art feed-forward visual geometry model, and identify its primary bottleneck. Visualization further reveals a token collapse phenomenon in the attention maps. Motivated by these findings, we explore the potential of token merging in the feed-forward visual geometry model. Owing to the unique architectural and task-specific properties of 3D models, directly applying existing merging techniques proves challenging. To this end, we propose FastVGGT, which, for the first time, leverages token merging in the 3D domain through a training-free mechanism for accelerating VGGT. We devise a unique token partitioning strategy tailored to 3D architectures and tasks, effectively eliminating redundant computation while preserving VGGT's powerful reconstruction capacity. Extensive experiments on multiple 3D geometry benchmarks validate the effectiveness of our approach. Notably, with 1,000 input images, FastVGGT achieves a 4× speedup over VGGT while mitigating error accumulation in long-sequence scenarios. These findings underscore the potential of token merging as a principled solution for scalable 3D vision systems. Code is available at: https://mystorm16.github.io/fastvggt/.
Problem

Research questions and friction points this paper is trying to address.

Accelerating 3D visual geometry transformer inference
Addressing token collapse in attention maps
Reducing redundant computation in long-sequence inputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free token merging for 3D acceleration
Unique token partitioning strategy for 3D tasks
Preserves reconstruction capacity while eliminating redundancy
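The pipeline the bullets describe — partition tokens into sets, merge the most redundant ones, and keep the rest untouched — can be sketched with a generic ToMe-style bipartite merge. This is an illustrative stand-in under simplifying assumptions (cosine similarity, alternating split, pairwise averaging), not FastVGGT's actual task-aware partitioning, whose details are in the paper and code:

```python
import numpy as np

def merge_tokens(tokens: np.ndarray, r: int) -> np.ndarray:
    """Generic bipartite token merging: reduce an (n, d) token
    sequence by r tokens by averaging the r most redundant pairs.
    """
    # Split tokens into two alternating sets A and B.
    a, b = tokens[0::2], tokens[1::2]

    # Cosine similarity of every A token against every B token.
    a_n = a / np.linalg.norm(a, axis=1, keepdims=True)
    b_n = b / np.linalg.norm(b, axis=1, keepdims=True)
    sim = a_n @ b_n.T

    # For each A token, its most similar partner in B and that score.
    best = sim.argmax(axis=1)
    score = sim.max(axis=1)

    # The r highest-scoring A tokens are merged; the rest are kept.
    order = np.argsort(-score)
    merge_idx, keep_idx = order[:r], order[r:]

    # Average each merged A token into its matched B token.
    merged_b = b.copy()
    for i in merge_idx:
        j = best[i]
        merged_b[j] = (merged_b[j] + a[i]) / 2.0

    # Surviving A tokens plus the (partly updated) B tokens.
    return np.concatenate([a[keep_idx], merged_b], axis=0)
```

Because merging only averages existing token vectors, the step needs no training; FastVGGT's contribution lies in choosing the partition so that geometry-critical tokens are never collapsed.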
🔎 Similar Papers
2024-07-16 · European Conference on Computer Vision · Citations: 1