Assessing Brittleness of Image-Text Retrieval Benchmarks from Vision-Language Models Perspective

📅 2024-07-21
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing image–text retrieval (ITR) benchmarks rely on coarse-grained annotations and may not reflect model performance and robustness under realistic multi-granularity query scenarios. Method: The paper analyzes this evaluation bias using augmented, fine-grained variants of two standard benchmarks, MS-COCO-FG and Flickr30k-FG, together with a taxonomy of query perturbations spanning semantic, syntactic, and visual-alignment dimensions. Four state-of-the-art vision-language models are evaluated under a consistent zero-shot cross-modal retrieval protocol. Results: All models score higher and degrade less under perturbation on the fine-grained benchmarks, and their sensitivity to perturbations is highly consistent across models, indicating that the observed fragility stems from benchmark limitations rather than intrinsic model deficiencies. The work exposes a limitation of ITR evaluation through the lens of data granularity and closes with an agenda for building more reliable multimodal evaluation pipelines.

📝 Abstract
We examine the brittleness of the image-text retrieval (ITR) evaluation pipeline with a focus on concept granularity. We start by analyzing two common benchmarks, MS-COCO and Flickr30k, and compare them with augmented, fine-grained versions, MS-COCO-FG and Flickr30k-FG, given a specified set of linguistic features capturing concept granularity. Flickr30k-FG and MS-COCO-FG consistently yield higher scores across all the selected features. To further our understanding of the impact of granularity, we consider a novel taxonomy of query perturbations and apply these perturbations to the selected datasets. We evaluate four diverse state-of-the-art vision-language models on both the standard and fine-grained datasets under zero-shot conditions, with and without the applied perturbations. The results demonstrate that although perturbations generally degrade model performance, the fine-grained datasets exhibit a smaller performance drop than their standard counterparts. The relative performance drop across all setups is consistent across all models and datasets, indicating that the issue lies within the benchmarks themselves. We conclude by providing an agenda for improving ITR evaluation pipelines.
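Zero-shot cross-modal retrieval of the kind evaluated here typically scores each caption query against every candidate image by embedding similarity and reports Recall@K. A minimal sketch of that metric, using toy NumPy embeddings in place of a real vision-language encoder (all names and the noise model are illustrative, not from the paper):

```python
import numpy as np

def recall_at_k(text_emb, image_emb, k=1):
    """Recall@K for text-to-image retrieval, assuming query i's
    ground-truth image is image i (parallel caption/image pairs)."""
    # Cosine similarity via L2-normalised embeddings.
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    v = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    sims = t @ v.T  # shape: (n_queries, n_images)
    # Indices of the k most similar images for each query.
    topk = np.argsort(-sims, axis=1)[:, :k]
    # A hit: the ground-truth index appears among the top k.
    hits = (topk == np.arange(len(t))[:, None]).any(axis=1)
    return hits.mean()

# Toy data: each caption embedding is a noisy copy of its image embedding.
rng = np.random.default_rng(0)
images = rng.normal(size=(100, 64))
texts = images + 0.1 * rng.normal(size=(100, 64))
print(recall_at_k(texts, images, k=5))
```

In the paper's setup, the embeddings would come from a pretrained vision-language model applied without fine-tuning, and the drop in Recall@K under perturbed queries is the robustness signal being compared across standard and fine-grained datasets.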
Problem

Research questions and friction points this paper is trying to address.

Examining how dataset granularity affects retrieval performance in ITR systems
Assessing robustness of VLMs under query perturbations in image-text retrieval
Evaluating impact of caption granularity on model sensitivity to perturbations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-grained annotations enhance retrieval performance
Perturbations reveal nuanced model behaviors
Word order critically affects model sensitivity
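One perturbation family in the taxonomy targets word order: shuffling a caption's tokens preserves its bag of words while destroying syntax, probing whether a model retrieves on sentence structure or mere word co-occurrence. A minimal sketch (function name and seeding are illustrative, not taken from the paper):

```python
import random

def shuffle_words(caption: str, seed: int = 0) -> str:
    """Word-order perturbation: permute tokens while keeping the
    bag of words intact."""
    words = caption.split()
    rng = random.Random(seed)  # seeded for reproducible perturbations
    rng.shuffle(words)
    return " ".join(words)

print(shuffle_words("a brown dog catches a red frisbee"))
```

A retrieval model that scores the shuffled caption almost as highly as the original is relying on lexical overlap rather than compositional structure, which is the kind of sensitivity difference the perturbation study measures.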
Mariya Hendriksen
University of Oxford
Artificial Intelligence · Vision and Language · AI for Neuroscience
Shuo Zhang
Bloomberg, London, UK
R. Reinanda
Bloomberg, London, UK
Mohamed Yahya
Bloomberg, London, UK
E. Meij
Bloomberg, London, UK
M. D. Rijke
University of Amsterdam, Amsterdam, The Netherlands