Reloc-VGGT: Visual Re-localization with Geometry Grounded Transformer

📅 2025-12-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Conventional visual relocalization pipelines fuse multi-view spatial information late, which limits fusion quality and the robustness of absolute pose estimation. This paper proposes a geometry-aware early-fusion framework for multi-view spatial integration. Methodologically, it replaces late fusion with an early-fusion mechanism that directly models cross-view geometric constraints; introduces a pose tokenizer and a differentiable projection module to achieve a unified pose representation across both structured and unstructured environments; and incorporates a sparse masked attention mechanism that significantly reduces computational overhead without sacrificing accuracy. Built upon the VGGT geometric encoder backbone and trained with supervision on 8 million posed image pairs, the framework achieves accurate, real-time, and highly generalizable absolute pose estimation across multiple public benchmarks, demonstrating strong robustness to unseen scenes.

📝 Abstract
Visual localization has traditionally been formulated as a pair-wise pose regression problem. Existing approaches mainly estimate relative poses between two images and employ a late-fusion strategy to obtain absolute pose estimates. However, late motion averaging is often insufficient for effectively integrating spatial information, and its accuracy degrades in complex environments. In this paper, we present the first visual localization framework that performs multi-view spatial integration through an early-fusion mechanism, enabling robust operation in both structured and unstructured environments. Our framework is built upon the VGGT backbone, which encodes multi-view 3D geometry, and we introduce a pose tokenizer and projection module to more effectively exploit spatial relationships from multiple database views. Furthermore, we propose a novel sparse mask attention strategy that reduces computational cost by avoiding the quadratic complexity of global attention, thereby enabling real-time performance at scale. Trained on approximately eight million posed image pairs, Reloc-VGGT demonstrates strong accuracy and remarkable generalization ability. Extensive experiments across diverse public datasets consistently validate the effectiveness and efficiency of our approach, delivering high-quality camera pose estimates in real time while maintaining robustness to unseen environments. Our code and models will be publicly released upon acceptance: https://github.com/dtc111111/Reloc-VGGT.
Problem

Research questions and friction points this paper is trying to address.

Develops a multi-view visual localization framework with early-fusion for robust pose estimation
Introduces a geometry-aware transformer and sparse attention for efficient real-time performance
Enhances accuracy and generalization in structured and unstructured environments using spatial integration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Early-fusion multi-view spatial integration for robust localization.
Pose tokenizer and projection module exploit spatial relationships effectively.
Sparse mask attention reduces computational cost for real-time performance.
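To make the sparse mask attention idea concrete, here is a minimal NumPy sketch of masked scaled dot-product attention. Note this is a generic illustration, not the paper's implementation: the `local_window_mask` sparsity pattern is a hypothetical stand-in for whatever mask design Reloc-VGGT actually uses; the point is only that masked-out query/key pairs receive zero attention weight, so their contributions never need to be computed in a sparse kernel.

```python
import numpy as np

def masked_attention(Q, K, V, mask):
    """Scaled dot-product attention restricted by a boolean mask.

    Q, K, V: (n, d) arrays; mask: (n, n) boolean, True = may attend.
    Masked-out logits are set to -inf, so their softmax weight is exactly 0.
    """
    d = Q.shape[-1]
    logits = Q @ K.T / np.sqrt(d)                  # dense (n, n) scores
    logits = np.where(mask, logits, -np.inf)       # suppress masked pairs
    logits -= logits.max(axis=-1, keepdims=True)   # numerical stability
    w = np.exp(logits)
    w /= w.sum(axis=-1, keepdims=True)             # rows sum to 1
    return w @ V, w

def local_window_mask(n, window=2):
    """Hypothetical sparsity pattern: each token attends to a local window."""
    idx = np.arange(n)
    return np.abs(idx[:, None] - idx[None, :]) <= window

rng = np.random.default_rng(0)
n, d = 8, 4
Q, K, V = rng.standard_normal((3, n, d))
mask = local_window_mask(n)
out, w = masked_attention(Q, K, V, mask)
```

In a real sparse-attention kernel, the masked entries are skipped rather than computed and zeroed, which is where the claimed reduction from quadratic global-attention cost comes from; this dense sketch only demonstrates the equivalence of the result.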
Tianchen Deng, Shanghai Jiao Tong University (Robotics, Computer Vision)
Wenhua Wu, Shanghai Jiao Tong University (Computer Vision)
Kunzhen Wu, Shanghai Jiao Tong University
Guangming Wang, University of Cambridge, ETH Zurich, and Shanghai Jiao Tong University (Robot Vision, Robot Manipulation, Robotics, Computer Vision, Autonomous Driving)
Siting Zhu, Shanghai Jiao Tong University
Shenghai Yuan, Nanyang Technological University
Xun Chen, Nanyang Technological University
Guole Shen, Shanghai Jiao Tong University
Zhe Liu, Shanghai Jiao Tong University
Hesheng Wang, Shanghai Jiao Tong University