If At First You Don't Succeed: Test Time Re-ranking for Zero-shot, Cross-domain Retrieval

📅 2023-03-30
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses a central challenge in zero-shot, cross-domain image retrieval: relevant gallery images may share semantics with the query while exhibiting mismatched visual features. We propose a test-time Iterative Cluster-free Re-ranking method that performs unsupervised feature propagation based solely on similarity among gallery images. It requires no target-domain annotations, no domain-specific priors (e.g., sketches), and no task-specific architectural design, and it is compatible with arbitrary pre-trained backbones (e.g., ViT). Our key contribution is an end-to-end, cluster-free, unsupervised re-ranking mechanism operating entirely at test time. Evaluated on sketch-based retrieval benchmarks (Sketchy, TU-Berlin, QuickDraw), the method achieves state-of-the-art performance. On Office-Home cross-domain retrieval tasks, including cartoon-to-photo and art-to-product, it delivers significant improvements, demonstrating strong generalisability across diverse domain shifts.
📝 Abstract
In this paper, we introduce a novel method for zero-shot, cross-domain image retrieval. Our key contribution is a test-time Iterative Cluster-free Re-ranking process that leverages gallery-gallery feature information to establish semantic links between query and gallery images. This enables the retrieval of relevant images even when they do not exhibit similar visual features but share underlying semantic concepts. This can be combined with any pre-existing cross-domain feature extraction backbone to improve retrieval performance. However, when combined with a carefully chosen Vision Transformer backbone and combination of zero-shot retrieval losses, our approach yields state-of-the-art results on the Sketchy, TU-Berlin and QuickDraw sketch-based retrieval benchmarks. We show that our re-ranking also improves performance with other backbones and outperforms other re-ranking methods applied with our backbone. Importantly, unlike many previous methods, none of the components in our approach are engineered specifically towards the sketch-based image retrieval task; it can be generally applied to any cross-domain, zero-shot retrieval task. We therefore also present new results on zero-shot cartoon-to-photo and art-to-product retrieval using the Office-Home dataset. Project page: finlay-hudson.github.io/icfrr, code available at: github.com/finlay-hudson/ICFRR
Problem

Research questions and friction points this paper is trying to address.

Improves zero-shot cross-domain image retrieval accuracy
Leverages semantic links between query and gallery images
Applicable to various domains beyond sketch-based retrieval
Innovation

Methods, ideas, or system contributions that make the work stand out.

Test-time Iterative Cluster-free Re-ranking process
Leverages gallery-gallery feature information
Combines Vision Transformer with zero-shot losses
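To make the idea above concrete, here is a minimal, hypothetical sketch of diffusion-style, cluster-free re-ranking: initial query-gallery scores are iteratively blended with scores propagated through a top-k gallery-gallery affinity graph, so a gallery image can be promoted via semantically similar neighbours even if it matches the query poorly in raw feature space. The function name, update rule, and parameters (`k`, `alpha`, `n_iters`) are illustrative assumptions, not the paper's exact ICFRR algorithm.

```python
import numpy as np

def iterative_rerank(query_feats, gallery_feats, k=10, alpha=0.3, n_iters=5):
    """Hypothetical diffusion-style re-ranking sketch (not the paper's
    exact method): propagate query-gallery scores through a sparse
    gallery-gallery affinity graph, entirely at test time."""
    # L2-normalise so dot products are cosine similarities
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)

    s0 = q @ g.T                    # initial query-gallery scores
    a = g @ g.T                     # dense gallery-gallery similarity
    # keep only each gallery item's k most similar neighbours
    drop = np.argsort(-a, axis=1)[:, k:]
    np.put_along_axis(a, drop, 0.0, axis=1)
    a = np.maximum(a, 0.0)          # discard negative affinities
    a = a / a.sum(axis=1, keepdims=True)  # row-stochastic affinity graph

    s = s0.copy()
    for _ in range(n_iters):
        # blend the original scores with graph-propagated scores
        s = alpha * s0 + (1 - alpha) * (s @ a)
    return np.argsort(-s, axis=1)   # ranked gallery indices per query
```

Because the update only uses gallery-gallery similarities, this kind of re-ranking needs no target-domain labels and can sit on top of any frozen feature extractor, which matches the plug-and-play claim made for the method.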
Finlay G. C. Hudson
Department of Computer Science, University of York, York, United Kingdom
William A. P. Smith
Professor, Department of Computer Science, University of York
Computer Vision · Computer Graphics · Machine Learning