🤖 AI Summary
To address localization ambiguity in cross-city visual place recognition (VPR) caused by highly similar landmark appearances, this paper proposes a joint perception framework based on learnable contextual queries. Methodologically, the authors design a pixel-level contextual query module to explicitly model fine-grained visual associations between landmarks and their surroundings (e.g., roads, vegetation); introduce a heatmap-based attention mechanism to generate spatially adaptive contextual responses; and define a query matching loss for end-to-end optimization. The key contribution is the first integration of learnable query mechanisms into VPR contextual modeling, enabling collaborative landmark–environment representation. Experiments show that the method significantly outperforms state-of-the-art approaches on multiple standard benchmarks, with especially notable gains in robustness and accuracy under challenging cross-city settings, strong illumination changes, and large viewpoint differences.
📝 Abstract
Visual Place Recognition (VPR) aims to estimate the location of a given query image by matching it against a database of geo-tagged images. Detecting landmarks is crucial for identifying the exact location depicted in an image. However, some scenarios, such as urban environments, contain numerous landmarks (e.g., many modern buildings), and landmarks in different cities often exhibit high visual similarity. It is therefore essential not only to leverage the landmarks themselves but also to consider the contextual information surrounding them, such as whether there are trees, roads, or other features nearby. We propose Contextual Query VPR (CQVPR), which integrates contextual information with detailed pixel-level visual features. Through a set of learnable contextual queries, our method automatically learns high-level context about landmarks and their surrounding areas. Heatmaps depicting the regions each query attends to serve as context-aware features, offering cues that enhance the understanding of each scene. We further propose a query matching loss to supervise the extraction of the contextual queries. Extensive experiments on several datasets demonstrate that the proposed method outperforms other state-of-the-art methods, especially in challenging scenarios.
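The core mechanism described above, learnable queries attending over pixel-level features to produce per-query heatmaps and pooled context descriptors, can be illustrated with a minimal sketch. This is a hypothetical NumPy illustration of generic query-based cross-attention, not the paper's actual implementation; the function and parameter names (`contextual_queries`, `feat`, `queries`) are invented for this example.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def contextual_queries(feat, queries):
    """Hypothetical sketch of contextual-query attention.

    feat:    (C, H, W) pixel-level feature map from a backbone
    queries: (Q, C) learnable contextual query vectors

    Returns:
      heatmaps: (Q, H, W) attention map per query -- the "regions each
                query attends to", usable as context-aware features
      context:  (Q, C) attention-pooled descriptor per query
    """
    C, H, W = feat.shape
    pixels = feat.reshape(C, H * W).T            # (N, C), N = H*W
    # scaled dot-product similarity between each query and every pixel
    logits = queries @ pixels.T / np.sqrt(C)     # (Q, N)
    heatmaps = softmax(logits, axis=-1)          # each row sums to 1
    # pool pixel features under each query's heatmap
    context = heatmaps @ pixels                  # (Q, C)
    return heatmaps.reshape(-1, H, W), context

# usage: 4 queries over a 32-channel, 8x8 feature map
rng = np.random.default_rng(0)
hm, ctx = contextual_queries(rng.normal(size=(32, 8, 8)),
                             rng.normal(size=(4, 32)))
print(hm.shape, ctx.shape)  # (4, 8, 8) (4, 32)
```

In a trained model the query vectors would be learned end-to-end (e.g., under a loss such as the paper's query matching loss), so that different queries specialize to landmarks versus surrounding context like roads or vegetation.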