🤖 AI Summary
Weakly supervised text-based person re-identification (TPRe-ID) aims to match images and textual descriptions across modalities without identity labels, but suffers from large intra-class variation and a deep semantic gap between modalities. This paper introduces CLIP into weakly supervised TPRe-ID for the first time, mapping both modalities into a shared latent embedding space. The proposed Cross-Modal Prototypical Contrastive Learning (CPCL) method uses a Prototypical Multi-modal Memory (PMM) module to model prototype-level associations between image–text pairs, a Hybrid Cross-modal Matching (HCM) strategy for many-to-many cross-modal matching, and an Outlier Pseudo Label Mining (OPLM) mechanism for pseudo-label refinement. By moving beyond instance-level learning, the approach substantially strengthens cross-modal alignment. Extensive experiments demonstrate state-of-the-art performance: Rank@1 improvements of 11.58%, 8.77%, and 5.25% on CUHK-PEDES, ICFG-PEDES, and RSTPReid, respectively.
📝 Abstract
Weakly supervised text-based person re-identification (TPRe-ID) seeks to retrieve images of a target person using textual descriptions without relying on identity annotations, making it both more challenging and more practical. The primary challenge is intra-class variation, encompassing intra-modal feature variation and the cross-modal semantic gap. Prior works have focused on instance-level samples and ignored the prototypical features of each person, which are intrinsic and invariant. To this end, we propose a Cross-Modal Prototypical Contrastive Learning (CPCL) method. CPCL introduces the CLIP model to weakly supervised TPRe-ID for the first time, mapping visual and textual instances into a shared latent space. The proposed Prototypical Multi-modal Memory (PMM) module then captures associations between the heterogeneous modalities of image-text pairs belonging to the same person through the Hybrid Cross-modal Matching (HCM) module in a many-to-many mapping fashion. Moreover, the Outlier Pseudo Label Mining (OPLM) module distinguishes valuable outlier samples in each modality, mining implicit relationships between image-text pairs to form more reliable clusters. Experimental results demonstrate that CPCL attains state-of-the-art performance on all three public datasets, with significant improvements of 11.58%, 8.77% and 5.25% in Rank@1 accuracy on CUHK-PEDES, ICFG-PEDES and RSTPReid, respectively. The code is available at https://github.com/codeGallery24/CPCL.
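To make the core idea of prototypical contrastive learning concrete, the sketch below computes an InfoNCE-style loss that pulls an embedding toward the prototype of its pseudo-labeled cluster and pushes it away from the other prototypes. This is a minimal, hypothetical simplification (function names, the dot-product similarity, and the temperature value are illustrative assumptions, not the paper's actual PMM/HCM implementation, which maintains memory banks for both modalities).

```python
import math

def prototypical_contrastive_loss(feature, prototypes, pos_idx, temperature=0.07):
    """InfoNCE-style loss between one L2-normalized embedding and a bank of
    cluster prototypes (illustrative sketch, not the paper's exact objective).

    feature     -- list of floats, an image or text embedding
    prototypes  -- list of embeddings, one per pseudo-labeled cluster
    pos_idx     -- index of the prototype for this sample's cluster
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    # Similarity of the feature to every prototype, scaled by temperature.
    logits = [dot(feature, p) / temperature for p in prototypes]

    # Numerically stable -log softmax at the positive prototype.
    m = max(logits)
    log_denom = math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[pos_idx] - m) + log_denom

# A feature aligned with its own cluster prototype incurs a lower loss
# than one assigned to the wrong cluster.
f = [1.0, 0.0]
protos = [[1.0, 0.0], [0.0, 1.0]]
loss_correct = prototypical_contrastive_loss(f, protos, pos_idx=0)
loss_wrong = prototypical_contrastive_loss(f, protos, pos_idx=1)
```

In CPCL this kind of prototype-level term is applied in both directions (image-to-text-prototype and text-to-image-prototype), which is what distinguishes it from purely instance-level contrastive matching.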