FIGROTD: A Friendly-to-Handle Dataset for Image Guided Retrieval with Optional Text

📅 2025-11-27
🤖 AI Summary
To address the lack of lightweight, balanced benchmarks and methods for image-guided retrieval with optional text (IGROT), this work proposes: (1) FIGROTD, the first lightweight benchmark jointly supporting vision-only retrieval and compositional retrieval (with text), covering three key scenarios—compositional image retrieval (CIR), sketch-based image retrieval (SBIR), and compositional SBIR (CSTBIR); (2) Variance-Guided Feature Masking (VaGFeM), a novel mechanism that dynamically enhances discriminative feature dimensions based on inter-sample variance; and (3) a dual-objective training strategy integrating InfoNCE and triplet loss to balance performance across both query modalities. Evaluated on nine benchmarks, our approach significantly outperforms strong baselines—e.g., achieving 34.8 mAP@10 on CIRCO and 75.7 mAP@200 on Sketchy—demonstrating both the practical utility of FIGROTD and the generalizability of our method.

📝 Abstract
Image-Guided Retrieval with Optional Text (IGROT) unifies visual retrieval (without text) and composed retrieval (with text). Despite its relevance in applications like Google Images and Bing, progress has been limited by the lack of an accessible benchmark and of methods that balance performance across subtasks. Large-scale datasets such as MagicLens are comprehensive but computationally prohibitive, while existing models often favor either visual or compositional queries. We introduce FIGROTD, a lightweight yet high-quality IGROT dataset with 16,474 training triplets and 1,262 test triplets spanning CIR, SBIR, and CSTBIR. To reduce redundancy, we propose the Variance Guided Feature Mask (VaGFeM), which selectively enhances discriminative dimensions based on variance statistics. We further adopt a dual-loss design (InfoNCE + Triplet) to improve compositional reasoning. Trained on FIGROTD, VaGFeM achieves competitive results on nine benchmarks, reaching 34.8 mAP@10 on CIRCO and 75.7 mAP@200 on Sketchy, outperforming strong baselines despite using far fewer triplets.
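The abstract describes VaGFeM only at a high level: feature dimensions with high inter-sample variance are treated as discriminative and enhanced, while low-variance dimensions are suppressed. The paper's exact formulation is not given here, so the sketch below is a minimal illustration of that idea, assuming a soft mask with a hypothetical `keep_ratio` cutoff and a guessed down-weighting factor of 0.1:

```python
import numpy as np

def variance_guided_mask(features: np.ndarray, keep_ratio: float = 0.5) -> np.ndarray:
    """Illustrative sketch of a variance-guided feature mask.

    features: (N, D) batch of embeddings.
    Dimensions whose inter-sample variance falls in the top `keep_ratio`
    fraction are kept at full weight; the rest are down-weighted, which
    concentrates the embedding on its more discriminative axes.
    """
    var = features.var(axis=0)                      # per-dimension variance over the batch
    k = max(1, int(keep_ratio * features.shape[1])) # number of dimensions to keep
    threshold = np.sort(var)[-k]                    # k-th largest variance value
    mask = np.where(var >= threshold, 1.0, 0.1)     # 0.1 suppression factor is a guess
    return features * mask
```

A constant dimension (zero variance across the batch) carries no ranking signal, so it is the first to be suppressed; the actual VaGFeM mechanism may use a learned or smoother weighting.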
Problem

Research questions and friction points this paper is trying to address.

Unifies visual and composed retrieval tasks
Addresses lack of accessible IGROT benchmark
Balances performance across subtasks effectively
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces lightweight dataset FIGROTD for image-text retrieval
Proposes Variance Guided Feature Mask to enhance discriminative dimensions
Uses dual-loss design combining InfoNCE and Triplet loss
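The dual-loss design pairs a batch-contrastive InfoNCE term with a margin-based triplet term. The combination below is a minimal sketch of that pairing; the temperature, margin, and the `alpha`/`beta` weights are placeholder values, not the paper's:

```python
import numpy as np

def info_nce(q: np.ndarray, pos: np.ndarray, temp: float = 0.07) -> float:
    """InfoNCE over a batch: the positive for query i is target i,
    and all other in-batch targets act as negatives."""
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    pos = pos / np.linalg.norm(pos, axis=1, keepdims=True)
    logits = q @ pos.T / temp                              # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)            # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))

def triplet_loss(q: np.ndarray, pos: np.ndarray, neg: np.ndarray,
                 margin: float = 0.2) -> float:
    """Standard margin triplet loss on L2 distances."""
    d_pos = np.linalg.norm(q - pos, axis=1)
    d_neg = np.linalg.norm(q - neg, axis=1)
    return float(np.mean(np.maximum(0.0, d_pos - d_neg + margin)))

def dual_loss(q, pos, neg, alpha: float = 1.0, beta: float = 1.0) -> float:
    # alpha/beta weighting is an assumption; the summary does not give the mix
    return alpha * info_nce(q, pos) + beta * triplet_loss(q, pos, neg)
```

Intuitively, InfoNCE shapes the global embedding space against many in-batch negatives, while the triplet term enforces a per-sample margin, which can help the harder compositional (image + text) queries.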