🤖 AI Summary
This paper examines a limitation of dense retrievers' text encoders: fixed-dimensional embeddings may fail to capture fine-grained entities or events, causing retrieval failures even on simple cases. To study this behavior, the authors introduce CapRetrieval, a new Chinese evaluation dataset whose passages are image captions and whose queries are phrases asking about entities or events in various forms. Zero-shot evaluation shows that encoders can fail at such fine-grained matching regardless of model size or pretraining source. The authors then finetune encoders with their proposed data generation strategies, achieving the best performance on CapRetrieval. In this process, they identify a granularity dilemma: the difficulty for embeddings to express fine-grained salience while remaining aligned with overall semantics. The dataset, code, and models are publicly released.
📝 Abstract
This work focuses on an observed limitation of text encoders: embeddings may not be able to recognize fine-grained entities or events within the semantics, resulting in failed dense retrieval on even simple cases. To examine such behaviors, we first introduce a new evaluation dataset in Chinese, named CapRetrieval, whose passages are image captions and whose queries are phrases inquiring about entities or events in various forms. Zero-shot evaluation suggests that encoders may fail at such fine-grained matching, regardless of training sources or model sizes. Aiming for enhancement, we proceed to finetune encoders with our proposed data generation strategies, which yields the best performance on CapRetrieval. Within this process, we further identify a granularity dilemma: a challenge for embeddings to express fine-grained salience while aligning with overall semantics. Our dataset, code, and models in this work are publicly released at https://github.com/lxucs/CapRetrieval.
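The failure mode described above arises in standard dense retrieval, where passages are ranked by embedding similarity to the query; a caption embedding dominated by global semantics can score low against a query targeting one salient entity. A minimal sketch of this ranking step (plain NumPy, with toy vectors standing in for encoder outputs; the function name and example values are illustrative, not from the paper):

```python
import numpy as np

def rank_passages(query_emb: np.ndarray, passage_embs: np.ndarray) -> np.ndarray:
    """Rank passages by cosine similarity to the query embedding.

    query_emb: shape (d,); passage_embs: shape (n, d).
    Returns passage indices sorted from most to least similar.
    """
    q = query_emb / np.linalg.norm(query_emb)
    p = passage_embs / np.linalg.norm(passage_embs, axis=1, keepdims=True)
    scores = p @ q  # cosine similarity per passage
    return np.argsort(-scores)

# Toy fixed-dimensional embeddings (stand-ins for real encoder outputs).
query = np.array([1.0, 0.0, 0.2])        # e.g., a phrase query about one entity
passages = np.array([
    [0.9, 0.1, 0.1],                     # caption mentioning the queried entity
    [0.1, 0.9, 0.0],                     # unrelated caption
])
order = rank_passages(query, passages)
print(order)  # index of the matching caption should come first
```

The granularity dilemma is precisely that a real encoder may not place a caption's fine-grained entity close to the query in this space, even when the overall sentence embedding is otherwise reasonable.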