🤖 AI Summary
To address the insufficient multi-view spatial information fusion and the poor robustness of absolute pose estimation caused by the conventional late-fusion strategy in visual relocalization, this paper proposes a geometry-aware early-fusion framework for multi-view spatial integration. Methodologically, it replaces late fusion with an early-fusion mechanism that directly models cross-view geometric constraints; introduces a pose tokenizer and a differentiable projection module for a unified pose representation across both structured and unstructured environments; and incorporates a sparse masked attention mechanism that significantly reduces computational overhead without sacrificing accuracy. Built on the VGGT geometric encoder backbone and trained with supervision on roughly eight million posed image pairs, the framework delivers accurate, real-time, and highly generalizable absolute pose estimation across multiple public benchmarks, demonstrating strong robustness to unseen scenes.
📝 Abstract
Visual localization has traditionally been formulated as a pairwise pose regression problem. Existing approaches mainly estimate relative poses between two images and employ a late-fusion strategy to obtain absolute pose estimates. However, late-stage motion averaging is often insufficient for effectively integrating spatial information, and its accuracy degrades in complex environments. In this paper, we present the first visual localization framework that performs multi-view spatial integration through an early-fusion mechanism, enabling robust operation in both structured and unstructured environments. Our framework is built upon the VGGT backbone, which encodes multi-view 3D geometry, and we introduce a pose tokenizer and projection module to more effectively exploit spatial relationships from multiple database views. Furthermore, we propose a novel sparse mask attention strategy that reduces computational cost by avoiding the quadratic complexity of global attention, thereby enabling real-time performance at scale. Trained on approximately eight million posed image pairs, Reloc-VGGT demonstrates strong accuracy and remarkable generalization ability. Extensive experiments across diverse public datasets consistently validate the effectiveness and efficiency of our approach, delivering high-quality camera pose estimates in real time while maintaining robustness to unseen environments. Our code and models will be publicly released upon acceptance at https://github.com/dtc111111/Reloc-VGGT.
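The abstract does not detail the sparse mask attention design, but the general idea of masked attention, restricting each query to a sparse set of allowed keys so the cost scales with the mask density rather than quadratically in sequence length, can be illustrated with a minimal numpy sketch. The function name, the banded neighborhood pattern, and all shapes below are assumptions for illustration, not the paper's actual implementation:

```python
import numpy as np

def sparse_masked_attention(q, k, v, mask):
    """Attention restricted to positions where mask is True.

    q, k, v: (N, d) arrays; mask: (N, N) boolean. Scores outside the
    mask are set to -inf before the softmax, so each query attends
    only to its allowed keys (masked positions get zero weight).
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                  # (N, N) similarities
    scores = np.where(mask, scores, -np.inf)       # drop disallowed pairs
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Toy setup: each token attends only to a local window of w neighbors,
# so the effective cost is O(N*w) rather than O(N^2) with dense masks.
rng = np.random.default_rng(0)
N, d, w = 8, 4, 2
q = rng.normal(size=(N, d))
k = rng.normal(size=(N, d))
v = rng.normal(size=(N, d))
idx = np.arange(N)
mask = np.abs(idx[:, None] - idx[None, :]) <= w    # banded neighborhood mask
out = sparse_masked_attention(q, k, v, mask)
print(out.shape)  # (8, 4)
```

In practice such masks are paired with block-sparse kernels so the excluded score entries are never materialized; the dense `np.where` here is only for clarity.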