AI Summary
Existing methods for clothing-changing person re-identification (CC-ReID) under non-overlapping camera views rely on manually annotated clothing labels or identity-related auxiliary modalities, limiting practicality and generalizability.
Method: We propose a semantics-driven framework that requires no auxiliary modalities or clothing annotations. Specifically, we design a Semantics Mining and Refinement (SMR) module to extract and refine identity-related content semantics and salient semantics, mitigating interference from clothing appearance; we further introduce the Content and Salient Semantics Collaboration (CSSC) framework to enable cross-branch interaction and joint optimization of the mined semantic representations.
Contribution/Results: Our method achieves state-of-the-art performance on three mainstream CC-ReID benchmarks, significantly outperforming prior approaches. It is an end-to-end solution for clothing-change-robust person re-identification that eliminates reliance on auxiliary modalities and clothing labels while maintaining high discriminability and robustness.
Abstract
Cloth-changing person re-identification aims at recognizing the same person despite clothing changes across non-overlapping cameras. Advanced methods either resort to identity-related auxiliary modalities (e.g., sketches, silhouettes, and keypoints) or clothing labels to mitigate the impact of clothes. However, reliance on impractical and inflexible auxiliary modalities or annotations limits their real-world applicability. In this paper, we promote cloth-changing person re-identification by leveraging the abundant semantics present within pedestrian images, without the need for any auxiliaries. Specifically, we first propose a unified Semantics Mining and Refinement (SMR) module to extract robust identity-related content and salient semantics, effectively mitigating interference from clothing appearance. We further propose the Content and Salient Semantics Collaboration (CSSC) framework to coordinate and leverage these semantics, facilitating cross-parallel semantic interaction and refinement. Our proposed method achieves state-of-the-art performance on three cloth-changing benchmarks, demonstrating its superiority over advanced competitors. The code is available at https://github.com/QizaoWang/CSSC-CCReID.
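The abstract does not detail how the content and salient branches interact. A purely illustrative sketch of the cross-branch idea, assuming (not taken from the paper) an attention-style fusion in which each branch's region features are refined by affinities with the other branch before being pooled into an identity embedding:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_branch_interaction(content, salient):
    """Hypothetical cross-branch refinement: each branch attends to the
    other branch's features. content, salient: (N, D) matrices of N
    region features with dimension D."""
    affinity = content @ salient.T                        # (N, N) cross affinities
    content_ref = content + softmax(affinity, axis=1) @ salient
    salient_ref = salient + softmax(affinity.T, axis=1) @ content
    return content_ref, salient_ref

rng = np.random.default_rng(0)
content = rng.standard_normal((4, 8))   # stand-in content-branch features
salient = rng.standard_normal((4, 8))   # stand-in salience-branch features
c_ref, s_ref = cross_branch_interaction(content, salient)

# Pool each refined branch and concatenate into one identity embedding
embedding = np.concatenate([c_ref.mean(axis=0), s_ref.mean(axis=0)])
print(embedding.shape)  # (16,)
```

All function and variable names here are invented for illustration; the actual SMR/CSSC modules, losses, and backbone are defined in the released code at the repository above.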