🤖 AI Summary
Cross-view geolocalization often suffers from low Top-1 accuracy under high recall due to the difficulty in identifying the optimal match among candidates. This work proposes a two-stage framework: first generating retrieval candidates using a state-of-the-art model, then re-ranking them with a zero-shot vision-language model. The study systematically compares pointwise scoring and pairwise comparison strategies, revealing for the first time that such models perform poorly in absolute relevance scoring but excel at fine-grained relative visual judgments. Leveraging this insight, the authors introduce a novel re-ranking paradigm based on pairwise comparisons. Experiments on the VIGOR dataset demonstrate that this approach significantly improves Top-1 accuracy, whereas pointwise methods yield limited gains or even degrade performance.
📝 Abstract
Cross-view geolocalization (CVGL) systems, while effective at retrieving a list of relevant candidates (high Recall@k), often fail to identify the single best match (low Top-1 accuracy). This work investigates the use of zero-shot Vision-Language Models (VLMs) as rerankers to address this gap. We propose a two-stage framework: state-of-the-art (SOTA) retrieval followed by VLM reranking. We systematically compare two strategies: (1) Pointwise (scoring candidates individually) and (2) Pairwise (comparing candidates relatively). Experiments on the VIGOR dataset show a clear divergence: all pointwise methods either cause a catastrophic drop in performance or yield no change at all. In contrast, a pairwise comparison strategy using LLaVA improves Top-1 accuracy over the strong retrieval baseline. Our analysis concludes that these VLMs are poorly calibrated for absolute relevance scoring but are effective at fine-grained relative visual judgment, making pairwise reranking a promising direction for enhancing CVGL precision.
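The pairwise reranking stage described in the abstract can be sketched as a winner-stays pass over the retrieval candidates: the current best candidate is challenged by each remaining one, and the VLM's relative judgment decides which to keep. This is a minimal illustrative sketch, not the paper's implementation; the function names and the toy comparator (`toy_prefers`, standing in for a zero-shot LLaVA comparison of the query against two candidates) are assumptions.

```python
# Sketch of the second stage of the two-stage pipeline: pairwise
# re-ranking of retrieval candidates. The VLM call is stubbed with a
# toy comparator; `prefers` is a hypothetical interface, not an API
# from the paper.
from typing import Callable, List


def pairwise_rerank(query, candidates: List[dict],
                    prefers: Callable[[object, dict, dict], bool]) -> List[dict]:
    """Winner-stays pass: compare each candidate against the current
    best; `prefers(query, a, b)` returns True if a is the better match.
    Returns the candidate list with the overall winner first."""
    if not candidates:
        return []
    best, rest = candidates[0], []
    for cand in candidates[1:]:
        if prefers(query, cand, best):
            rest.append(best)
            best = cand
        else:
            rest.append(cand)
    return [best] + rest


# Toy stand-in for the VLM judgment: prefer the higher-scored candidate.
# In the real system this would be a zero-shot LLaVA prompt comparing
# the ground-level query image against two aerial candidates.
def toy_prefers(query, a, b):
    return a["score"] > b["score"]


cands = [{"id": 1, "score": 0.4}, {"id": 2, "score": 0.9}, {"id": 3, "score": 0.6}]
reranked = pairwise_rerank("query_img", cands, toy_prefers)
print(reranked[0]["id"])  # the toy comparator promotes candidate 2 to Top-1
```

This needs only k-1 comparisons for k candidates, which keeps the VLM cost linear in the shortlist size; the trade-off is that the result depends on the comparator being reasonably transitive.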