🤖 AI Summary
Existing image-text retrieval (ITR) benchmarks rely on coarse-grained annotations, so they fail to reflect model performance and robustness under realistic, fine-grained query scenarios. Method: We analyze the root cause of this evaluation bias by comparing the standard MS-COCO and Flickr30k benchmarks with their augmented fine-grained counterparts, MS-COCO-FG and Flickr30k-FG, and introduce a taxonomy of query perturbations spanning semantic, syntactic, and visual-alignment dimensions. We evaluate four state-of-the-art vision-language models under a consistent zero-shot cross-modal retrieval protocol, with and without perturbations. Results: All models score higher and degrade less under perturbation on the fine-grained benchmarks, and their sensitivity to perturbations is highly consistent across models, indicating that the observed fragility stems from benchmark limitations rather than intrinsic model deficiencies. This work exposes a fundamental limitation of ITR evaluation through the lens of data granularity and motivates more reliable multimodal evaluation pipelines.
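The taxonomy itself is defined in the paper; purely as a hypothetical illustration of what caption-level perturbations can look like, the minimal sketch below applies a syntactic (word-order) and a semantic (synonym-swap) change to a query. The function names and the specific perturbation choices are invented for this example and are not the paper's taxonomy.

```python
import random

def shuffle_words(caption: str, seed: int = 0) -> str:
    """Syntactic perturbation (hypothetical example): permute word order
    while keeping the caption's content words intact."""
    words = caption.split()
    rng = random.Random(seed)
    rng.shuffle(words)
    return " ".join(words)

def swap_synonyms(caption: str, synonyms: dict[str, str]) -> str:
    """Semantic perturbation (hypothetical example): replace words using a
    user-supplied mapping, e.g. toward coarser-grained concepts."""
    return " ".join(synonyms.get(w, w) for w in caption.split())

caption = "A brown dog catches a red frisbee in the park"
print(shuffle_words(caption))
print(swap_synonyms(caption, {"dog": "animal", "frisbee": "toy"}))
```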
📝 Abstract
We examine the brittleness of the image-text retrieval (ITR) evaluation pipeline with a focus on concept granularity. We start by analyzing two common benchmarks, MS-COCO and Flickr30k, and compare them with augmented, fine-grained versions, MS-COCO-FG and Flickr30k-FG, using a specified set of linguistic features that capture concept granularity. MS-COCO-FG and Flickr30k-FG consistently score higher across all selected features. To further our understanding of the impact of granularity, we introduce a novel taxonomy of query perturbations and apply them to the selected datasets. We evaluate four diverse state-of-the-art Vision-Language models on both the standard and fine-grained datasets under zero-shot conditions, with and without the applied perturbations. The results demonstrate that although perturbations generally degrade model performance, the fine-grained datasets exhibit a smaller performance drop than their standard counterparts. The relative performance drop is consistent across all models, datasets, and setups, indicating that the issue lies within the benchmarks themselves. We conclude by providing an agenda for improving ITR evaluation pipelines.
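The abstract does not spell out the retrieval metric; as a rough sketch of the kind of zero-shot evaluation described, the example below computes text-to-image Recall@K with an off-the-shelf CLIP checkpoint from Hugging Face. The model name, the Recall@K protocol, and the assumption that caption i matches image i are illustrative choices, not the paper's exact setup or model lineup.

```python
import torch
from transformers import CLIPModel, CLIPProcessor

MODEL_NAME = "openai/clip-vit-base-patch32"  # assumed checkpoint for illustration
model = CLIPModel.from_pretrained(MODEL_NAME)
processor = CLIPProcessor.from_pretrained(MODEL_NAME)

def recall_at_k(images, captions, k=1):
    """Zero-shot text-to-image retrieval: caption i is assumed to have
    image i as its single ground-truth match.

    images: list of PIL.Image, captions: list of str (same length).
    """
    inputs = processor(text=captions, images=images,
                       return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        out = model(**inputs)
    sims = out.logits_per_text              # (num_captions, num_images)
    topk = sims.topk(k, dim=-1).indices     # top-k retrieved image indices
    targets = torch.arange(len(captions)).unsqueeze(-1)
    return (topk == targets).any(dim=-1).float().mean().item()
```

Running the same function on original and perturbed captions, and on standard versus fine-grained datasets, yields the relative performance drops the paper compares.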