Rethinking Composed Image Retrieval Evaluation: A Fine-Grained Benchmark from Image Editing

📅 2026-01-22
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
Existing composed image retrieval (CIR) benchmarks cover only a limited range of query categories and fail to capture the fine-grained diversity required in real-world scenarios. To address this gap, this work proposes EDIR, a fine-grained CIR benchmark constructed with image editing techniques, comprising 5,000 high-quality queries across 5 major categories and 15 subcategories. EDIR is the first benchmark to enable precise control over both query content and modification type through controllable editing-based synthesis. A comprehensive evaluation of 13 state-of-the-art multimodal models reveals significant performance disparities across subcategories, even among top-performing methods. Further in-domain training experiments underscore EDIR's difficulty and utility, exposing critical limitations in current models and existing benchmarks.
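
To make the construction idea concrete, here is a minimal sketch of an editing-based triplet synthesis step, assuming an off-the-shelf instruction-following editor (InstructPix2Pix via Hugging Face diffusers) as the editing backbone. The paper's actual editing model, instruction taxonomy, and the `synthesize_triplet` helper are not specified in this summary; they are illustrative assumptions.

```python
# Illustrative sketch (not the authors' pipeline): synthesize a CIR triplet
# (reference image, modification text, target image) with an off-the-shelf
# instruction-following editor. Model choice and taxonomy are assumptions.
import torch
from PIL import Image
from diffusers import StableDiffusionInstructPix2PixPipeline

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")  # assumes a GPU is available

def synthesize_triplet(reference_path: str, instruction: str) -> dict:
    """Apply `instruction` to the reference image to obtain the target.

    Controlling the instruction template controls the modification type,
    which is the core idea behind editing-based query construction.
    """
    reference = Image.open(reference_path).convert("RGB")
    target = pipe(
        instruction,
        image=reference,
        num_inference_steps=20,
        image_guidance_scale=1.5,  # how strongly to stay close to the reference
    ).images[0]
    return {"reference": reference, "text": instruction, "target": target}

# Hypothetical subcategory -> instruction template mapping; EDIR's actual
# 5 categories / 15 subcategories are not enumerated in this summary.
templates = {
    "attribute/color": "change the {object} to {color}",
    "object/removal": "remove the {object}",
}
triplet = synthesize_triplet(  # "dog.jpg" is a placeholder path
    "dog.jpg",
    templates["attribute/color"].format(object="dog's collar", color="red"),
)
```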

📝 Abstract
Composed Image Retrieval (CIR) is a pivotal and complex task in multimodal understanding. Current CIR benchmarks typically feature limited query categories and fail to capture the diverse requirements of real-world scenarios. To bridge this evaluation gap, we leverage image editing to achieve precise control over modification types and content, enabling a pipeline that synthesizes queries across a broad spectrum of categories. Using this pipeline, we construct EDIR, a novel fine-grained CIR benchmark comprising 5,000 high-quality queries structured across five main categories and fifteen subcategories. Our comprehensive evaluation of 13 multimodal embedding models reveals a significant capability gap: even state-of-the-art models (e.g., RzenEmbed and GME) struggle to perform consistently across all subcategories, highlighting the rigor of our benchmark. Through comparative analysis, we further uncover inherent limitations of existing benchmarks, such as modality biases and insufficient categorical coverage. Finally, an in-domain training experiment validates the benchmark's utility and clarifies the task's challenges by distinguishing categories that are solvable with targeted data from those that expose intrinsic limitations of current model architectures.
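
For readers unfamiliar with the task setup, the following sketch shows the standard CIR evaluation protocol the abstract presupposes: a multimodal embedding model fuses the reference image and modification text into a single query vector, gallery images are ranked by cosine similarity, and Recall@K is reported (in EDIR's case, per subcategory). The `embed_query` and `embed_image` stubs below are placeholders for a real model such as GME or RzenEmbed, not the paper's actual interface.

```python
# Minimal CIR evaluation sketch (assumed standard protocol, not the paper's
# code). A real run would replace the random stubs with a multimodal
# embedding model (e.g., GME or RzenEmbed) that fuses image + text.
import numpy as np

rng = np.random.default_rng(0)
DIM = 512

def embed_query(reference_image, modification_text) -> np.ndarray:
    """Stub: fuse (reference image, modification text) into one query vector."""
    return rng.normal(size=DIM)

def embed_image(image) -> np.ndarray:
    """Stub: embed a candidate/target image."""
    return rng.normal(size=DIM)

def recall_at_k(queries, gallery, targets, k=10) -> float:
    """queries: (Q, D); gallery: (N, D); targets[i] is the gallery index of
    the ground-truth image for query i. Ranks gallery by cosine similarity."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = q @ g.T                          # (Q, N) cosine similarities
    topk = np.argsort(-sims, axis=1)[:, :k]  # indices of the k best matches
    hits = (topk == np.asarray(targets)[:, None]).any(axis=1)
    return float(hits.mean())

# Toy usage with random stubs; with EDIR one would additionally aggregate
# recall per subcategory to expose the disparities the paper reports.
queries = np.stack([embed_query(None, None) for _ in range(100)])
gallery = np.stack([embed_image(None) for _ in range(1000)])
targets = rng.integers(0, 1000, size=100)
print(f"Recall@10: {recall_at_k(queries, gallery, targets, k=10):.3f}")
```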
Problem

Research questions and friction points this paper is trying to address.

Composed Image Retrieval
evaluation benchmark
fine-grained
multimodal understanding
image editing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Composed Image Retrieval
fine-grained benchmark
image editing
multimodal evaluation
EDIR