🤖 AI Summary
Existing image retrieval methods based on the “global-feature initial search + local-feature re-ranking” paradigm suffer from high computational cost in local matching and poor scalability to large candidate sets. To address these limitations, this paper proposes a novel “local-to-global” retrieval paradigm. First, efficient local feature search enables fine-grained and scalable initial matching. Then, global features are dynamically generated from local similarities, and multidimensional scaling (MDS) is employed to construct an embedding space that preserves local structural relationships, enabling online global re-ranking. This approach seamlessly integrates the discriminative power of local matching with the efficiency of global re-ranking. Evaluated on the Revisited Oxford and Paris benchmarks, the method achieves state-of-the-art performance, significantly improving both accuracy and efficiency for large-scale image retrieval.
📝 Abstract
The dominant paradigm in image retrieval systems today is to search large databases using global image features, and re-rank those initial results with local image feature matching techniques. This design, dubbed global-to-local, stems from the computational cost of local matching approaches, which can only be afforded for a small number of retrieved images. However, emerging efficient local feature search approaches have opened up new possibilities, in particular enabling detailed retrieval at large scale, to find partial matches which are often missed by global feature search. In parallel, global feature-based re-ranking has shown promising results with high computational efficiency. In this work, we leverage these building blocks to introduce a local-to-global retrieval paradigm, where efficient local feature search meets effective global feature re-ranking. Critically, we propose a re-ranking method where global features are computed on-the-fly, based on the local feature retrieval similarities. Such re-ranking-only global features leverage multidimensional scaling techniques to create embeddings which respect the local similarities obtained during search, enabling a significant re-ranking boost. Experimentally, we demonstrate solid retrieval performance, setting new state-of-the-art results on the Revisited Oxford and Paris datasets.
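The re-ranking step described above can be illustrated with classical multidimensional scaling: given the pairwise local-feature similarities among the query and its retrieved candidates, one builds an embedding whose distances respect those similarities, then re-ranks candidates by distance to the query in that space. The sketch below is a minimal illustration under assumed inputs; the similarity matrix, the `classical_mds` helper, and the similarity-to-dissimilarity conversion are hypothetical stand-ins, not the paper's actual pipeline.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical MDS: embed points so Euclidean distances approximate D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                  # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dim]           # keep the top `dim` components
    w_top = np.clip(w[idx], 0.0, None)        # guard tiny negative eigenvalues
    return V[:, idx] * np.sqrt(w_top)         # (n, dim) embedding

# Hypothetical local-feature similarities: row/col 0 is the query,
# rows/cols 1-4 are the top-4 candidates from the initial local search.
S = np.array([
    [1.0, 0.9, 0.2, 0.8, 0.1],
    [0.9, 1.0, 0.3, 0.7, 0.2],
    [0.2, 0.3, 1.0, 0.2, 0.6],
    [0.8, 0.7, 0.2, 1.0, 0.1],
    [0.1, 0.2, 0.6, 0.1, 1.0],
])
D = 1.0 - S                                   # similarity -> dissimilarity
X = classical_mds(D, dim=2)                   # re-ranking-only global features
# Re-rank candidates by embedding distance to the query.
order = np.argsort(np.linalg.norm(X[1:] - X[0], axis=1))
```

Here the two candidates with high local similarity to the query (indices 0 and 2 of the candidate slice) end up closest in the embedding, so the final ranking agrees with the local evidence while the re-ranking itself only compares cheap global vectors.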