UniPR-3D: Towards Universal Visual Place Recognition with Visual Geometry Grounded Transformer

📅 2025-12-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing visual place recognition (VPR) methods predominantly rely on single-image retrieval or generalize poorly in multi-view settings. To address this, the paper proposes the first unified VPR framework that supports variable-length multi-view inputs. Its three key components are: (1) a geometry-aware multi-view fusion architecture built on a Visual Geometry Grounded Transformer (VGGT) backbone; (2) a dual-stream 2D/3D feature aggregation module that aligns fine-grained texture cues with cross-view geometry; and (3) an adaptive single-/multi-frame retrieval strategy for variable-length sequences. The framework is fine-tuned via contrastive learning. Extensive experiments show consistent, significant gains over state-of-the-art single- and multi-view VPR methods across multiple benchmarks, and ablations confirm that geometry-grounded tokens are critical for robust, cross-environment VPR performance.
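The dual-stream 2D/3D aggregation described above can be sketched roughly as follows. The token shapes, the GeM-style pooling for the 2D stream, and the concatenate-and-normalise fusion are all illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def aggregate_descriptor(tokens_2d, tokens_3d, p=3.0):
    """Fuse per-view 2D tokens and geometry-grounded 3D tokens into one
    global place descriptor (hypothetical sketch, not the paper's code).

    tokens_2d: (n_views, n_tokens, d) intermediate 2D backbone tokens
    tokens_3d: (n_tokens_3d, d) 3D tokens shared across views
    """
    # GeM-style pooling over all 2D tokens: emphasises salient texture cues
    flat_2d = tokens_2d.reshape(-1, tokens_2d.shape[-1])
    desc_2d = np.mean(np.abs(flat_2d) ** p, axis=0) ** (1.0 / p)
    # Mean pooling over 3D tokens: summarises cross-view geometry
    desc_3d = tokens_3d.mean(axis=0)
    # Concatenate the two streams and L2-normalise for cosine retrieval
    desc = np.concatenate([desc_2d, desc_3d])
    return desc / (np.linalg.norm(desc) + 1e-12)
```

Keeping the two streams separate until the final concatenation lets each pooling operator match its input: texture-heavy 2D tokens benefit from the peakier GeM pooling, while 3D tokens are averaged.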

📝 Abstract
Visual Place Recognition (VPR) has been traditionally formulated as a single-image retrieval task. Using multiple views offers clear advantages, yet this setting remains relatively underexplored and existing methods often struggle to generalize across diverse environments. In this work, we introduce UniPR-3D, the first VPR architecture that effectively integrates information from multiple views. UniPR-3D builds on a VGGT backbone capable of encoding multi-view 3D representations, which we adapt by designing feature aggregators and fine-tune for the place recognition task. To construct our descriptor, we jointly leverage the 3D tokens and intermediate 2D tokens produced by VGGT. Based on their distinct characteristics, we design dedicated aggregation modules for 2D and 3D features, allowing our descriptor to capture fine-grained texture cues while also reasoning across viewpoints. To further enhance generalization, we incorporate both single- and multi-frame aggregation schemes, along with a variable-length sequence retrieval strategy. Our experiments show that UniPR-3D sets a new state of the art, outperforming both single- and multi-view baselines and highlighting the effectiveness of geometry-grounded tokens for VPR. Our code and models will be made publicly available on GitHub: https://github.com/dtc111111/UniPR-3D.
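The variable-length sequence retrieval strategy mentioned in the abstract can be illustrated with a minimal sketch. Mean-pooling per-frame descriptors into one sequence descriptor is an assumed simplification, not the paper's actual scheme:

```python
import numpy as np

def sequence_descriptor(frame_descs):
    """Collapse a variable-length list of per-frame descriptors into a
    single L2-normalised sequence descriptor (illustrative mean pooling)."""
    d = np.asarray(frame_descs).mean(axis=0)
    return d / (np.linalg.norm(d) + 1e-12)

def retrieve(query_frames, database):
    """Rank database places by cosine similarity to the query sequence.

    `database` maps place id -> list of per-frame descriptors; queries and
    database entries may contain different numbers of frames.
    """
    q = sequence_descriptor(query_frames)
    scores = {pid: float(sequence_descriptor(f) @ q)
              for pid, f in database.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

Because pooling happens before comparison, a one-frame query can be matched against a ten-frame database entry with the same scoring code, which is the practical point of a single-/multi-frame adaptive scheme.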
Problem

Research questions and friction points this paper is trying to address.

VPR has traditionally been formulated as single-image retrieval, leaving multi-view information underused
Existing multi-view methods generalize poorly across diverse environments
No prior architecture combines 2D texture cues and 3D geometry while handling variable-length sequence inputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-view 3D representation encoding with a VGGT backbone, adapted via dedicated feature aggregators
Joint aggregation of intermediate 2D tokens (fine-grained texture) and 3D tokens (cross-view geometry)
Single- and multi-frame aggregation schemes with variable-length sequence retrieval
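Since the framework is fine-tuned via contrastive learning, a minimal sketch of one common contrastive objective for VPR may help; a triplet margin loss on L2-normalised descriptors is an illustrative choice, as the exact loss used by the paper is not stated here:

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=0.1):
    """Contrastive triplet loss on L2-normalised place descriptors.

    Pulls the anchor toward a descriptor of the same place (positive) and
    pushes it from a different place (negative), using cosine distance.
    Illustrative sketch; not necessarily the paper's training objective.
    """
    d_pos = 1.0 - float(anchor @ positive)   # distance to same place
    d_neg = 1.0 - float(anchor @ negative)   # distance to different place
    return max(0.0, d_pos - d_neg + margin)  # zero once the margin is met
```

The loss is zero whenever the positive is already closer than the negative by at least the margin, so training focuses on hard triplets.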