ARNet: Self-Supervised FG-SBIR with Unified Sample Feature Alignment and Multi-Scale Token Recycling

πŸ“… 2024-06-17
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address the large sketch–image semantic gap, difficulty in cross-domain feature alignment, model saturation, and insufficient multi-scale information exploitation in fine-grained sketch-based image retrieval (FG-SBIR), this paper proposes a unified intra- and inter-sample feature alignment framework with multi-scale token reuse. Our key contributions are: (1) a novel joint mutual information sharing paradigm that simultaneously optimizes intra-sample structural consistency and inter-sample semantic alignment; (2) a Multi-Scale Token Recycling (MSTR) module that recovers and reuses patch tokens discarded during downsampling to enhance fine-grained discriminability; and (3) a dual-weight-sharing CNN/ViT backbone coupled with multi-scale feature resampling to mitigate training saturation. Extensive experiments demonstrate significant improvements over state-of-the-art methods on multiple benchmarks and a newly constructed fashion dataset, Cloths-V1. The framework exhibits strong architectural compatibility and generalization capability.
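The Multi-Scale Token Recycling idea can be illustrated with a minimal numpy sketch. The paper's actual MSTR module is not specified here, so the function name `recycle_tokens`, the `keep_ratio` parameter, and the score-weighted fusion of discarded tokens are all illustrative assumptions; the only idea taken from the summary is that patch tokens discarded during downsampling are recovered and reused rather than thrown away.

```python
import numpy as np

def recycle_tokens(tokens, scores, keep_ratio=0.5):
    """Illustrative sketch (not the paper's MSTR implementation):
    keep the top-scoring patch tokens and fuse the discarded ones
    into a single recycled token instead of dropping them."""
    n = tokens.shape[0]
    k = max(1, int(n * keep_ratio))
    order = np.argsort(scores)[::-1]      # highest importance first
    kept = tokens[order[:k]]              # tokens that survive pruning
    discarded = tokens[order[k:]]         # tokens normally thrown away
    if discarded.size == 0:
        return kept
    # Score-weighted average of discarded tokens -> one recycled token
    w = scores[order[k:]]
    w = w / w.sum()
    recycled = (w[:, None] * discarded).sum(axis=0, keepdims=True)
    return np.concatenate([kept, recycled], axis=0)
```

Applied per scale of a multi-scale feature pyramid, this keeps fine-grained information from pruned tokens available to later layers at the cost of a single extra token.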

πŸ“ Abstract
Fine-Grained Sketch-Based Image Retrieval (FG-SBIR) aims to minimize the distance between sketches and their corresponding images in the embedding space. However, scalability is hindered by the growing complexity of solutions, largely due to the abstract nature of fine-grained sketches. In this paper, we propose an effective approach to narrow the gap between the two domains. It facilitates unified mutual information sharing both within and across samples, rather than treating retrieval as a single cross-modal feature-alignment problem. Specifically, our approach includes: (i) employing dual weight-sharing networks to optimize alignment within the sketch and image domains, which also effectively mitigates model learning saturation; (ii) introducing an objective function based on contrastive loss to enhance the model's ability to align features both within and across samples; and (iii) presenting a self-supervised Multi-Scale Token Recycling (MSTR) module that recycles the patch tokens discarded from multi-scale features, further enhancing representation capability and retrieval performance. Our framework achieves excellent results with both CNN- and ViT-based backbones, and extensive experiments demonstrate its superiority over existing methods. We also introduce Cloths-V1, the first professional fashion sketch-image dataset, which we use to validate our method and which will also benefit other applications.
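The contrastive objective in point (ii) can be sketched as a standard bidirectional hinge loss over a batch of paired embeddings. This is a generic sketch, not the paper's exact loss: the function name `alignment_loss`, the `margin` value, and the cosine-similarity formulation are assumptions; the idea taken from the abstract is that matched sketch-image pairs are pulled together while non-matching pairs in the batch are pushed apart.

```python
import numpy as np

def l2norm(x):
    """Row-wise L2 normalization so dot products become cosine similarity."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def alignment_loss(sketch, image, margin=0.2):
    """Illustrative bidirectional triplet-style loss (not the paper's
    exact objective): matched sketch-image pairs sit on the diagonal
    of the similarity matrix; all off-diagonal pairs are negatives."""
    s, im = l2norm(sketch), l2norm(image)
    sim = s @ im.T                        # (B, B) cosine similarities
    pos = np.diag(sim)                    # matched-pair similarities
    # hinge cost over negatives, in both retrieval directions
    cost_s = np.maximum(0.0, margin + sim - pos[:, None])
    cost_i = np.maximum(0.0, margin + sim - pos[None, :])
    np.fill_diagonal(cost_s, 0.0)
    np.fill_diagonal(cost_i, 0.0)
    return (cost_s.sum() + cost_i.sum()) / sketch.shape[0]
```

The loss vanishes when every matched pair is more similar than every negative by at least the margin, which is the geometric goal stated in the abstract: minimizing sketch-image distance in the shared embedding space.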
Problem

Research questions and friction points this paper is trying to address.

Fine-Grained Sketch-Based Image Retrieval
Feature Alignment
Multi-Scale Information Utilization
Innovation

Methods, ideas, or system contributions that make the work stand out.

ARNet
Fine-Grained Sketch-Based Image Retrieval (FG-SBIR)
Self-learning Image Feature Alignment
πŸ”Ž Similar Papers
No similar papers found.
Jianan Jiang
Hunan University, ExponentiAI Innovation
Hao Tang
Peking University
Zhilin Jiang
Hunan University
Weiren Yu
University of Warwick
Di Wu
Hunan University, ExponentiAI Innovation