Few-Shot Text-to-Image Retrieval: New Benchmarking Dataset and Optimization Methods

πŸ“… 2026-03-26
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing vision-language models exhibit limited generalization in compositional queries and out-of-distribution image–text retrieval, struggling to emulate human few-shot learning capabilities. This work introduces the first few-shot text-to-image retrieval (FSIR) task and presents FSIR-BD, the inaugural benchmark dataset tailored for evaluating compositional reasoning and out-of-distribution generalization. Furthermore, we propose two general and efficient optimization strategies that are compatible with any pretrained image encoder. These methods enhance retrieval performance by incorporating one or a few reference examples alongside approximate nearest neighbor search. Experimental results demonstrate that our approaches significantly outperform existing baselines on FSIR-BD in terms of mean average precision (mAP), offering an effective solution for few-shot compositional inference.
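Retrieval quality in this work is reported as mean Average Precision (mAP). As a concrete reference, a minimal numpy implementation of AP/mAP over binary relevance lists (standard definitions, not code from the paper) looks like this:

```python
import numpy as np

def average_precision(ranked_relevance):
    """AP for one query. `ranked_relevance` is a 0/1 sequence in rank order:
    1 where the retrieved image is a ground-truth positive, 0 otherwise."""
    rel = np.asarray(ranked_relevance, dtype=float)
    if rel.sum() == 0:
        return 0.0
    # precision at each rank, counted only at the ranks that are hits
    precision_at_k = np.cumsum(rel) / (np.arange(len(rel)) + 1)
    return float((precision_at_k * rel).sum() / rel.sum())

def mean_average_precision(per_query_relevance):
    """mAP: the mean of per-query AP values."""
    return float(np.mean([average_precision(r) for r in per_query_relevance]))
```

For example, a ranking `[1, 0, 1]` (hits at ranks 1 and 3) gives AP = (1/1 + 2/3) / 2 = 5/6.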
πŸ“ Abstract
Pre-trained vision-language models (VLMs) excel in multimodal tasks, commonly encoding images as embedding vectors for storage in databases and retrieval via approximate nearest neighbor search (ANNS). However, these models struggle with compositional queries and out-of-distribution (OOD) image-text pairs. Inspired by human cognition's ability to learn from minimal examples, we address this performance gap through few-shot learning approaches specifically designed for image retrieval. We introduce the Few-Shot Text-to-Image Retrieval (FSIR) task and its accompanying benchmark dataset, FSIR-BD - the first to explicitly target image retrieval by text accompanied by reference examples, focusing on challenging compositional and OOD queries. The compositional part is divided into urban scenes and nature species, both in specific situations or with distinctive features. FSIR-BD contains 38,353 images and 303 queries, with 82% comprising the test corpus (averaging 37 positives, i.e., ground-truth matches, per query, plus a significant number of hard negatives) and 18% forming the few-shot reference corpus (FSR) of exemplar positive and hard-negative images. Additionally, we propose two novel retrieval optimization methods that leverage single-shot or few-shot reference examples in the FSR to improve performance. Both methods are compatible with any pre-trained image encoder, making them applicable to existing large-scale environments. Our experiments demonstrate that: (1) FSIR-BD provides a challenging benchmark for image retrieval; and (2) our optimization methods outperform existing baselines as measured by mean Average Precision (mAP). Further research into FSIR optimization methods will help narrow the gap between machine and human-level understanding, particularly for compositional reasoning from limited examples.
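The abstract describes the overall pipeline (embed the text query, refine it with a few positive/negative reference images, then run nearest-neighbor search over the image corpus) without detailing the two proposed methods. A generic sketch of that pipeline, assuming a Rocchio-style blending of reference embeddings (the weights `alpha`, `beta`, `gamma` and the blending rule are illustrative, not the paper's actual methods), could look like:

```python
import numpy as np

def normalize(v):
    """L2-normalize along the last axis, as is typical for VLM embeddings."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def refine_query(text_emb, pos_refs, neg_refs, alpha=0.7, beta=0.2, gamma=0.1):
    """Blend the text embedding with few-shot reference embeddings.
    Hypothetical Rocchio-style rule: pull toward positives, push from negatives."""
    q = alpha * text_emb
    if len(pos_refs):
        q = q + beta * pos_refs.mean(axis=0)
    if len(neg_refs):
        q = q - gamma * neg_refs.mean(axis=0)
    return normalize(q)

def retrieve(query_emb, corpus_embs, k=5):
    """Exact cosine-similarity ranking; a production system would replace
    this with ANNS (e.g., an HNSW or IVF index) over the same embeddings."""
    sims = corpus_embs @ query_emb
    return np.argsort(-sims)[:k]
```

Because refinement happens purely in embedding space, any pre-trained image encoder can supply `pos_refs`, `neg_refs`, and `corpus_embs`, matching the encoder-agnostic claim in the abstract.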
Problem

Research questions and friction points this paper is trying to address.

few-shot learning
text-to-image retrieval
compositional queries
out-of-distribution
vision-language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Few-shot learning
Text-to-image retrieval
Compositional reasoning
Out-of-distribution generalization
Vision-language models
πŸ”Ž Similar Papers
No similar papers found.
Ofer Idan
Columbia University
Synthetic Biology · Nanobiotechnology · Systems Biology
Vladi Vexler
Huawei Tel-Aviv Research Center
Gil Lederman
Huawei Tel-Aviv Research Center
Dima Sivov
Huawei Tel-Aviv Research Center
Aviad Cohen Zada
Huawei Tel-Aviv Research Center
Shir Niego Komforti
Huawei Tel-Aviv Research Center