Beyond Semantic Search: Towards Referential Anchoring in Composed Image Retrieval

📅 2026-04-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a limitation of existing Composed Image Retrieval (CIR) methods, which prioritize semantic matching but struggle to reliably preserve user-specified object instances across diverse scenes. To this end, we introduce Object-Anchored Composed Image Retrieval (OACIR), a novel fine-grained task that explicitly demands instance-level fidelity, and present OACIRR, the first large-scale, multi-domain benchmark dataset tailored to this objective. To tackle OACIR, we propose the AdaFocal framework, which leverages bounding boxes on reference images as visual anchors and employs a Context-Aware Attention Modulator to dynamically intensify focus on these anchored regions. Additionally, we devise a hard-negative-augmented candidate gallery construction strategy. Experiments demonstrate that AdaFocal substantially outperforms current approaches, achieving superior instance preservation while maintaining compositional semantics, thereby establishing a strong baseline for instance-aware retrieval systems.
📝 Abstract
Composed Image Retrieval (CIR) has demonstrated significant potential by enabling flexible multimodal queries that combine a reference image and modification text. However, CIR inherently prioritizes semantic matching, struggling to reliably retrieve a user-specified instance across contexts. In practice, emphasizing concrete instance fidelity over broad semantics is often more consequential. In this work, we propose Object-Anchored Composed Image Retrieval (OACIR), a novel fine-grained retrieval task that mandates strict instance-level consistency. To advance research on this task, we construct OACIRR (OACIR on Real-world images), the first large-scale, multi-domain benchmark comprising over 160K quadruples and four challenging candidate galleries enriched with hard-negative instance distractors. Each quadruple augments the compositional query with a bounding box that visually anchors the object in the reference image, providing a precise and flexible way to ensure instance preservation. To address the OACIR task, we propose AdaFocal, a framework featuring a Context-Aware Attention Modulator that adaptively intensifies attention within the specified instance region, dynamically balancing focus between the anchored instance and the broader compositional context. Extensive experiments demonstrate that AdaFocal substantially outperforms existing compositional retrieval models, particularly in maintaining instance-level fidelity, thereby establishing a robust baseline for this challenging task while opening new directions for more flexible, instance-aware retrieval systems.
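The abstract describes anchoring an object via a bounding box on the reference image and then adaptively intensifying attention inside that region. As a rough illustration of the general idea (not the paper's actual formulation: AdaFocal's modulator is learned and context-aware, whereas this sketch uses a fixed additive boost, and the patch-grid geometry and function names here are illustrative assumptions), boosting attention over bbox-anchored ViT patches can look like:

```python
import numpy as np

def bbox_patch_mask(bbox, image_size=224, patch_size=16):
    """Mark ViT-style patches whose centers fall inside the bounding box.

    bbox is (x0, y0, x1, y1) in pixels; returns a flat boolean mask
    over the (image_size // patch_size)^2 patch grid.
    """
    n = image_size // patch_size
    centers = (np.arange(n) + 0.5) * patch_size  # patch-center coordinates
    x0, y0, x1, y1 = bbox
    in_x = (centers >= x0) & (centers <= x1)
    in_y = (centers >= y0) & (centers <= y1)
    return (in_y[:, None] & in_x[None, :]).reshape(-1)

def modulated_attention(logits, mask, boost=2.0):
    """Additively boost attention logits on anchored patches, then softmax.

    In AdaFocal the strength would depend on context; here `boost` is a
    fixed scalar purely for illustration.
    """
    logits = logits + boost * mask.astype(float)
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)
```

With uniform logits over a 14x14 grid, patches inside the anchored box receive strictly higher attention weight than those outside, while the distribution still normalizes to 1, i.e. focus shifts toward the instance without discarding the surrounding compositional context.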
Problem

Research questions and friction points this paper is trying to address.

Composed Image Retrieval
Instance-Level Consistency
Referential Anchoring
Semantic Matching
Fine-Grained Retrieval
Innovation

Methods, ideas, or system contributions that make the work stand out.

Composed Image Retrieval
Instance-Level Consistency
Referential Anchoring
AdaFocal
OACIRR