Reducing Task Discrepancy of Text Encoders for Zero-Shot Composed Image Retrieval

📅 2024-06-13
🏛️ arXiv.org
📈 Citations: 4
Influential: 2
📄 PDF
🤖 AI Summary
This work addresses the task discrepancy in CLIP's encoders for zero-shot composed image retrieval (ZS-CIR): the pre-training objective (image-text matching) is misaligned with the downstream task (image + text → image matching). The authors propose Reducing Task Discrepancy (RTD), a plug-and-play, text-only training scheme that fine-tunes only the CLIP text encoder using cheap text triplets and a novel target-anchored text contrastive learning objective. Two supporting techniques strengthen this scheme: a hard-negatives-based refined batch sampling strategy and a sophisticated concatenation scheme. Crucially, RTD is fully compatible with existing projection-based ZS-CIR methods and requires no image-encoder adaptation. Experiments on benchmarks including CIRR and FashionIQ, across multiple CLIP backbones, demonstrate consistent and significant improvements in retrieval accuracy, along with strong training efficiency and cross-dataset generalization.

📝 Abstract
Composed Image Retrieval (CIR) aims to retrieve a target image based on a reference image and conditioning text, enabling controllable searches. Due to the expensive dataset construction cost for CIR triplets, a zero-shot (ZS) CIR setting has been actively studied to eliminate the need for human-collected triplet datasets. The mainstream of ZS-CIR employs an efficient projection module that projects a CLIP image embedding to the CLIP text token embedding space, while fixing the CLIP encoders. Using the projected image embedding, these methods generate image-text composed features by using the pre-trained text encoder. However, their CLIP image and text encoders suffer from the task discrepancy between the pre-training task (text $\leftrightarrow$ image) and the target CIR task (image + text $\leftrightarrow$ image). Conceptually, we need expensive triplet samples to reduce the discrepancy, but we use cheap text triplets instead and update the text encoder. To that end, we introduce the Reducing Task Discrepancy of text encoders for Composed Image Retrieval (RTD), a plug-and-play training scheme for the text encoder that enhances its capability using a novel target-anchored text contrastive learning. We also propose two additional techniques to improve the proposed learning scheme: a hard negatives-based refined batch sampling strategy and a sophisticated concatenation scheme. Integrating RTD into the state-of-the-art projection-based ZS-CIR methods significantly improves performance across various datasets and backbones, demonstrating its efficiency and generalizability.
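The projection-based ZS-CIR pipeline described above can be illustrated with a minimal NumPy sketch. All names, dimensions, and the mean-pooling stand-in for the frozen CLIP text encoder are illustrative assumptions, not the paper's implementation: the real methods learn the projection module and use the actual CLIP text encoder over a prompt like "a photo of [*] that ...".

```python
import numpy as np

# Illustrative dimensions (assumed, not from the paper).
D_IMG, D_TOK = 512, 512
rng = np.random.default_rng(0)

def project(image_emb, W):
    """Hypothetical projection module: maps a CLIP image embedding
    into the text token embedding space (a pseudo word token)."""
    return image_emb @ W

def compose(pseudo_token, text_tokens):
    """Stand-in for the frozen CLIP text encoder: mean-pools the
    pseudo token with the modification-text tokens and normalizes."""
    tokens = np.vstack([pseudo_token[None, :], text_tokens])
    feat = tokens.mean(axis=0)
    return feat / np.linalg.norm(feat)

# Toy query: a reference image embedding plus modification-text tokens.
ref_img = rng.normal(size=D_IMG)
W = rng.normal(size=(D_IMG, D_TOK)) / np.sqrt(D_IMG)
text_tokens = rng.normal(size=(4, D_TOK))
query = compose(project(ref_img, W), text_tokens)

# Retrieval: rank gallery image embeddings by cosine similarity.
gallery = rng.normal(size=(100, D_TOK))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)
ranking = np.argsort(-(gallery @ query))
```

Because the CLIP encoders stay frozen, only the projection `W` would be trained in practice; RTD's contribution is to additionally adapt the text encoder with text-only supervision.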
Problem

Research questions and friction points this paper is trying to address.

How can the task discrepancy between CLIP pre-training (text ↔ image) and the target CIR task (image + text ↔ image) be reduced without expensive triplet supervision?
How can the text encoder's compositional capability be enhanced using only cheap text data?
How can this be done efficiently, with minimal additional training resources?
Innovation

Methods, ideas, or system contributions that make the work stand out.

A text-only, plug-and-play training scheme (RTD) reduces task discrepancy in the CLIP text encoder.
Target-anchored text contrastive learning enhances the encoder using cheap text triplets.
A hard-negatives-based refined batch sampling strategy and a sophisticated concatenation scheme further improve training.
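The target-anchored contrastive idea can be sketched with a standard InfoNCE loss over text features: each composed-text feature should match its own target-text feature, with the other targets in the batch acting as negatives. This is a minimal NumPy illustration under assumed shapes, not the paper's loss; the paper additionally refines batches with hard negatives and uses a concatenation scheme not shown here.

```python
import numpy as np

def info_nce(query_feats, target_feats, temperature=0.07):
    """Illustrative target-anchored contrastive loss (InfoNCE):
    row i of query_feats is pulled toward row i of target_feats,
    while the other rows of target_feats serve as in-batch negatives."""
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    t = target_feats / np.linalg.norm(target_feats, axis=1, keepdims=True)
    logits = (q @ t.T) / temperature             # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # positives on the diagonal

# Toy batch: 8 (composed text, target text) feature pairs.
rng = np.random.default_rng(0)
queries = rng.normal(size=(8, 64))
targets = queries + 0.1 * rng.normal(size=(8, 64))  # targets near queries
loss = info_nce(queries, targets)
```

Mining harder negatives into the batch makes the off-diagonal logits larger, which is what the paper's refined batch sampling strategy exploits to sharpen the training signal.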
👥 Authors
Jaeseok Byun (Department of ECE, Seoul National University)
Seokhyeon Jeong (Department of ECE, Seoul National University)
Wonjae Kim (TwelveLabs)
Sanghyuk Chun (Princeton University)
Taesup Moon (Department of ECE, Seoul National University; Department of ASRI/INMC/IPAI/AIIS, Seoul National University)