🤖 AI Summary
To address the large sketch–image semantic gap, difficulty in cross-domain feature alignment, model saturation, and insufficient multi-scale information exploitation in fine-grained sketch-based image retrieval (FG-SBIR), this paper proposes a unified intra- and inter-sample feature alignment framework with multi-scale token reuse. Our key contributions are: (1) a novel joint mutual information sharing paradigm that simultaneously optimizes intra-sample structural consistency and inter-sample semantic alignment; (2) a Multi-Scale Token Recycling (MSTR) module that recovers and reuses patch tokens discarded during downsampling to enhance fine-grained discriminability; and (3) a dual-weight-sharing CNN/ViT backbone coupled with multi-scale feature resampling to mitigate training saturation. Extensive experiments demonstrate significant improvements over state-of-the-art methods on multiple benchmarks and a newly constructed fashion dataset, Cloths-V1. The framework exhibits strong architectural compatibility and generalization capability.
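To make the token-recycling idea concrete, here is a minimal NumPy sketch of recycling discarded patch tokens at a single scale. This is an illustrative interpretation, not the paper's MSTR implementation: the function name, the scoring input, and the attention-style pooling of dropped tokens are all assumptions.

```python
import numpy as np

def token_recycle(tokens, scores, keep_ratio=0.5):
    """Illustrative token recycling (shapes/names are assumptions):
    keep the top-scoring patch tokens, but instead of discarding the
    rest, fuse them into one 'recycled' token appended to the kept set.

    tokens: (N, D) patch tokens; scores: (N,) importance scores.
    """
    n = tokens.shape[0]
    k = max(1, int(n * keep_ratio))
    order = np.argsort(scores)[::-1]          # indices sorted by descending score
    kept = tokens[order[:k]]
    dropped = tokens[order[k:]]
    if dropped.size:
        # softmax-weighted pool of the discarded tokens
        w = np.exp(scores[order[k:]] - scores[order[k:]].max())
        w /= w.sum()
        recycled = (w[:, None] * dropped).sum(axis=0, keepdims=True)
        kept = np.concatenate([kept, recycled], axis=0)
    return kept
```

In a multi-scale setting, one would presumably apply this at each downsampling stage and feed the recycled tokens back into the next stage, so fine-grained evidence from low-scoring patches is not lost outright.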
📄 Abstract
Fine-Grained Sketch-Based Image Retrieval (FG-SBIR) aims to minimize the distance between sketches and their corresponding images in the embedding space. However, scalability is hindered by the growing complexity of existing solutions, owing mainly to the abstract nature of fine-grained sketches. In this paper, we propose an effective approach to narrow the gap between the two domains. Rather than treating retrieval as a single cross-modal feature alignment problem, it facilitates unified mutual information sharing both within and across samples. Specifically, our approach includes: (i) employing dual weight-sharing networks to optimize alignment within the sketch and image domains, which also effectively mitigates model learning saturation; (ii) introducing an objective function based on contrastive loss to strengthen the model's ability to align features at both the intra- and inter-sample level; and (iii) presenting a self-supervised Multi-Scale Token Recycling (MSTR) module that recycles patch tokens discarded from multi-scale features, further enhancing representation capability and retrieval performance. Our framework achieves excellent results on both CNN- and ViT-based backbones, and extensive experiments demonstrate its superiority over existing methods. We also introduce Cloths-V1, the first professional fashion sketch–image dataset; it is used to validate our method and should benefit other applications.
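The joint intra-/inter-sample objective described above can be sketched with a standard InfoNCE-style contrastive loss. This is a minimal NumPy illustration under stated assumptions, not the paper's actual loss: the augmented-view inputs, the weighting `lam`, and the simple additive combination are all hypothetical.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """Minimal InfoNCE-style contrastive loss: the positive for each
    anchor is the same-index row of `positives`; all other rows in the
    batch act as negatives."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                 # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))             # cross-entropy on matched pairs

def joint_alignment_loss(sketch_f, image_f, sketch_aug_f, image_aug_f, lam=0.5):
    """Hypothetical joint objective: the inter-sample term aligns paired
    sketch/image features across modalities; the intra-sample terms align
    each modality with an augmented view of itself."""
    inter = info_nce(sketch_f, image_f)
    intra = info_nce(sketch_f, sketch_aug_f) + info_nce(image_f, image_aug_f)
    return inter + lam * intra
```

The intra-sample terms here stand in for the within-domain alignment that the dual weight-sharing networks optimize, while the inter-sample term plays the role of the usual cross-modal triplet/contrastive objective.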