🤖 AI Summary
In Composed Image Retrieval (CIR), two key challenges impede performance: (1) query feature degradation caused by the imbalance between visually dominant and noisy regions, and (2) visual focus bias arising from insufficient prioritization of the textual modification intent. To address these, the authors propose a segmentation-guided focus shift revision framework. First, image segmentation localizes the dominant regions; these regions then guide a vision–text dual-stream feature extractor, and a text-guided adaptive focus revision module performs dynamic focus shift under cross-modal feature alignment. The core innovation lies in tightly coupling segmentation priors with text-driven attention, explicitly modeling the semantic priority of textual modifications. Evaluated on four standard CIR benchmarks, the method consistently outperforms existing state-of-the-art approaches, demonstrating robustness and generalization, particularly for complex compositional queries involving fine-grained attribute modifications.
📝 Abstract
Composed Image Retrieval (CIR) is a novel retrieval paradigm that can flexibly express users' intricate retrieval requirements. It enables users to issue a multimodal query, comprising a reference image and a modification text, and subsequently retrieve the target image. Notwithstanding the considerable advances made by prevailing methodologies, CIR remains in its nascent stages due to two limitations: 1) the inhomogeneity between dominant and noisy portions in visual data is ignored, leading to query feature degradation, and 2) the priority of textual data in the image modification process is overlooked, which leads to a visual focus bias. To address these two limitations, this work presents a focus mapping-based feature extractor, which consists of two modules: dominant portion segmentation and dual focus mapping. It is designed to identify significant dominant portions in images and to guide the extraction of visual and textual features, thereby reducing the impact of noise interference. Subsequently, we propose a textually guided focus revision module, which utilizes the modification requirements implied in the text to perform adaptive focus revision on the reference image, thereby enhancing the perception of the modification focus in the composed features. The aforementioned modules collectively constitute the segmentatiOn-based Focus shiFt reviSion nETwork (OFFSET), and comprehensive experiments on four benchmark datasets substantiate the superiority of our proposed method. The codes and data are available at https://zivchen-ty.github.io/OFFSET.github.io/
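The two ideas in the abstract, suppressing noisy image regions via a segmentation mask and letting the modification text steer attention over the reference image, can be illustrated with a minimal NumPy sketch. This is an assumption-laden simplification, not the paper's actual OFFSET architecture: the function name, the patch/token representation, and the use of a hard binary mask with max-pooled cross-attention are all illustrative choices.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def text_guided_focus_revision(img_feats, txt_feats, dominant_mask):
    """Illustrative sketch (hypothetical, not the paper's exact design).

    img_feats:     (P, d) features of P reference-image patches
    txt_feats:     (T, d) features of T modification-text tokens
    dominant_mask: (P,)   1 for segmented dominant patches, 0 for noise
    """
    # similarity between each text token and each image patch
    scores = txt_feats @ img_feats.T / np.sqrt(img_feats.shape[1])  # (T, P)
    # suppress noisy (non-dominant) patches before normalizing,
    # mirroring the "dominant portion segmentation" idea
    scores = np.where(dominant_mask[None, :] > 0, scores, -1e9)
    attn = softmax(scores, axis=-1)                                 # (T, P)
    # per-patch focus weight: strongest attention any text token pays to it,
    # mirroring the "textually guided focus revision" idea
    focus = attn.max(axis=0)                                        # (P,)
    # revise the reference features with the text-driven focus weights
    return img_feats * focus[:, None]
```

Under this sketch, patches outside the segmented dominant portion receive near-zero focus weight, so they contribute little to the composed query feature, while the text decides which dominant patches are emphasized.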