🤖 AI Summary
This survey addresses Composed Image Retrieval (CIR), an emerging multimodal retrieval task. We construct the first fine-grained taxonomy, unifying supervised and zero-shot CIR approaches while extending coverage to related paradigms, including attribute-based and dialog-based CIR. Drawing on an analysis of over 120 papers from top-tier venues (CVPR, SIGIR, TOIS, etc.), we review key techniques: deep multimodal representation alignment, cross-modal retrieval modeling, and zero-shot generalization. We catalog mainstream benchmarks (e.g., FashionIQ, CIRR) and their evaluation results, delineating current performance boundaries. Our core contribution is the first structured, principled CIR survey framework, which explicitly identifies open challenges and future research directions. This work serves as an authoritative reference and methodological foundation for advancing CIR research.
📝 Abstract
Composed Image Retrieval (CIR) is an emerging yet challenging task that allows users to search for target images using a multimodal query comprising a reference image and a modification text that specifies the desired changes to the reference image. Given its significant academic and practical value, CIR has become a rapidly growing area of interest in the computer vision and machine learning communities, particularly with the advances in deep learning. To the best of our knowledge, there is currently no comprehensive review of CIR that provides a timely overview of this field. Therefore, we synthesize insights from over 120 publications in top conferences and journals, including ACM TOIS, SIGIR, and CVPR. In particular, we systematically categorize existing supervised CIR and zero-shot CIR models using a fine-grained taxonomy. For completeness, we also briefly discuss approaches for tasks closely related to CIR, such as attribute-based CIR and dialog-based CIR. Additionally, we summarize benchmark datasets for evaluation and analyze existing supervised and zero-shot CIR methods by comparing experimental results across multiple datasets. Finally, we present promising future directions in this field, offering practical insights for researchers interested in further exploration.
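To make the task setup concrete, the following is a minimal, hypothetical sketch of the CIR retrieval pipeline described above: the reference image and the modification text are each mapped to an embedding, fused into a single composed query, and candidate images are ranked by similarity. The additive fusion used here is a simple illustrative baseline, not the method of any particular surveyed paper; real CIR models learn the fusion, and random vectors stand in for CLIP-style features.

```python
import numpy as np

def l2_normalize(x):
    # Scale vectors to unit length so dot products equal cosine similarity.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def compose_query(ref_image_emb, mod_text_emb):
    # Illustrative additive fusion of the reference-image and
    # modification-text embeddings; supervised CIR models learn this step.
    return l2_normalize(ref_image_emb + mod_text_emb)

def retrieve(query_emb, gallery_embs, k=3):
    # Rank gallery images by cosine similarity to the composed query
    # and return the indices of the top-k candidates.
    scores = l2_normalize(gallery_embs) @ query_emb
    return np.argsort(-scores)[:k]

# Toy example: random vectors stand in for pretrained image/text features.
rng = np.random.default_rng(0)
ref = rng.normal(size=4)       # reference-image embedding
text = rng.normal(size=4)      # modification-text embedding
gallery = rng.normal(size=(10, 4))  # candidate-image embeddings
query = compose_query(ref, text)
top_k = retrieve(query, gallery, k=3)
```

Evaluation on benchmarks such as FashionIQ and CIRR then checks whether the annotated target image appears among the top-k retrieved candidates (Recall@k).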