Visual Re-Ranking with Non-Visual Side Information

📅 2025-04-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing visual place recognition (VPR) re-ranking methods rely solely on the initial image descriptors and fail to exploit non-visual auxiliary information, such as WiFi fingerprints, Bluetooth signals, and camera poses, which limits retrieval performance. To address this, we propose Generalized Contextual Similarity Aggregation (GCSA), a graph neural network (GNN)-based re-ranking framework that fuses heterogeneous side information from multiple sources. GCSA encodes multimodal contextual cues via a shared affinity-vector representation, enabling unified learning across modalities, and jointly models cross-modal similarities over the visual, geometric, and signal modalities in an end-to-end manner. Evaluated on large-scale indoor and outdoor benchmarks, the approach significantly improves both retrieval accuracy and downstream visual localization precision, demonstrating substantial gains from integrating non-visual cues over conventional vision-only re-ranking.
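To make the shared affinity-vector encoding concrete, below is a minimal sketch of how heterogeneous inputs (visual descriptors, WiFi signal vectors) could each be reduced to an affinity vector over the top-k retrieval candidates. The function name, the cosine-similarity choice, and all dimensions are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def affinity_vector(query_feat, db_feats):
    """Affinity of one query against k database candidates (cosine similarity).

    Reducing every modality to a length-k affinity vector gives the downstream
    GNN a homogeneous input, regardless of the raw feature dimensionality.
    """
    q = query_feat / (np.linalg.norm(query_feat) + 1e-12)
    db = db_feats / (np.linalg.norm(db_feats, axis=1, keepdims=True) + 1e-12)
    return db @ q  # shape (k,), one similarity score per candidate

# Hypothetical sizes: 4096-d visual descriptors, 50-d WiFi RSSI vectors.
k = 100
vis_aff = affinity_vector(np.random.randn(4096), np.random.randn(k, 4096))
wifi_aff = affinity_vector(np.random.randn(50), np.random.randn(k, 50))

# Stacking per-modality affinities yields a shared per-candidate encoding.
node_features = np.stack([vis_aff, wifi_aff], axis=1)  # shape (k, 2)
```

The key design point is that modalities with very different raw dimensionalities all land in the same k-dimensional similarity space, which is what lets a single GNN consume them jointly.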

📝 Abstract
The standard approach for visual place recognition is to use global image descriptors to retrieve the most similar database images for a given query image. The results can then be further improved with re-ranking methods that re-order the top-scoring images. However, existing methods focus on re-ranking based on the same image descriptors that were used for the initial retrieval, which we argue provides limited additional signal. In this work we propose Generalized Contextual Similarity Aggregation (GCSA), a graph neural network-based re-ranking method that, in addition to the visual descriptors, can leverage other types of available side information. This can, for example, be other sensor data (such as the signal strength of nearby WiFi or Bluetooth endpoints) or geometric properties such as camera poses for database images. In many applications this information is already present or can be acquired with low effort. Our architecture leverages the concept of affinity vectors to allow for a shared encoding of the heterogeneous multi-modal input. Two large-scale datasets, covering both outdoor and indoor localization scenarios, are utilized for training and evaluation. In experiments we show significant improvements not only on image retrieval metrics, but also for the downstream visual localization task.
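As a rough illustration of the retrieve-then-re-rank pipeline the abstract describes, the sketch below scores the top-k candidates with a toy message-passing network over a candidate similarity graph. The class name, layer sizes, and the random stand-in graph are assumptions for illustration; this does not reproduce the paper's actual GCSA architecture.

```python
import torch
import torch.nn as nn

class ReRankGNN(nn.Module):
    """Toy GNN re-ranker: nodes are the top-k retrieval candidates,
    node features are per-modality affinity vectors, and one round of
    message passing refines a relevance score for each candidate."""
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.embed = nn.Linear(in_dim, hidden)
        self.msg = nn.Linear(hidden, hidden)
        self.score = nn.Linear(hidden, 1)

    def forward(self, x, adj):
        h = torch.relu(self.embed(x))           # (k, hidden) node embeddings
        h = h + adj @ torch.relu(self.msg(h))   # aggregate neighbor messages
        return self.score(h).squeeze(-1)        # (k,) refined scores

k, n_modalities = 100, 3  # e.g. visual + WiFi + camera-pose affinities
x = torch.randn(k, n_modalities)                # stand-in node features
adj = torch.softmax(torch.randn(k, k), dim=-1)  # stand-in similarity graph
order = torch.argsort(ReRankGNN(n_modalities)(x, adj), descending=True)
print(order[:10])  # re-ranked top-10 candidate indices
```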
Problem

Research questions and friction points this paper is trying to address.

Improving visual place recognition with multi-modal re-ranking
Leveraging non-visual data for enhanced image retrieval accuracy
Integrating sensor and geometric data into visual re-ranking
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph neural network-based re-ranking method
Leverages non-visual side information
Uses affinity vectors for multi-modal input
Gustav Hanning
Lund University
Gabrielle Flood
Lund University
Viktor Larsson
Assistant Professor at Lund University
Computer Vision · Optimization · Machine Learning