CCUP: A Controllable Synthetic Data Generation Pipeline for Pretraining Cloth-Changing Person Re-Identification Models

📅 2024-10-17
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address overfitting in cloth-changing person re-identification (CC-ReID) caused by scarce real annotations and the high cost of synthetic data, this paper proposes a low-cost, controllable synthetic data generation paradigm coupled with a scalable pretrain-finetune framework. The authors introduce CCUP—the first large-scale self-annotated synthetic CC-ReID dataset—containing 6,000 identities and 1.18 million images, enabling fine-grained clothing variation and multi-view-consistent rendering. Leveraging virtual human modeling and multi-camera synthesis, the approach achieves outfit-level control and cross-camera appearance consistency. Two representative models, TransReID and FIRe², are trained under a two-stage strategy: pretraining on CCUP followed by finetuning on the target benchmarks. The method achieves state-of-the-art performance on PRCC, VC-Clothes, and NKUP, improving mAP by 8.2% and 6.7% over prior works, demonstrating strong generalization and practical applicability.

📝 Abstract
Cloth-changing person re-identification (CC-ReID), also known as long-term person re-identification (LT-ReID), is a critical and challenging research topic in computer vision that has recently garnered significant attention. However, due to the high cost of constructing CC-ReID data, existing data-driven models are difficult to train efficiently on limited data, leading to overfitting. To address this challenge, we propose a low-cost and efficient pipeline for generating controllable, high-quality synthetic data that simulates real surveillance scenarios specific to the CC-ReID task. In particular, we construct a new self-annotated CC-ReID dataset named Cloth-Changing Unreal Person (CCUP), containing 6,000 IDs, 1,179,976 images, 100 cameras, and 26.5 outfits per individual. Based on this large-scale dataset, we introduce an effective and scalable pretrain-finetune framework for enhancing the generalization capabilities of traditional CC-ReID models. Extensive experiments demonstrate that two typical models, TransReID and FIRe^2, when integrated into our framework, outperform other state-of-the-art models after pretraining on CCUP and finetuning on benchmarks such as PRCC, VC-Clothes, and NKUP. CCUP is available at: https://github.com/yjzhao1019/CCUP.
Problem

Research questions and friction points this paper is trying to address.

Generates synthetic data for cloth-changing person re-identification
Addresses data scarcity and overfitting in CC-ReID models
Proposes pretrain-finetune framework to enhance model generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates controllable synthetic CC-ReID data
Introduces scalable pretrain-finetune framework
Enhances generalization with large-scale dataset
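The pretrain-finetune recipe above can be illustrated with a toy sketch: train on a large synthetic corpus first, then continue training the same weights on a small target set with a lower learning rate. The 1-D linear model and the data below are stand-ins of my own, not the paper's TransReID/FIRe^2 pipeline.

```python
# Hypothetical sketch of a two-stage pretrain-finetune loop.
# A large synthetic set (analogous to CCUP) establishes good initial
# weights; a small target set (analogous to PRCC) then adapts them.

def sgd(w, data, lr, epochs):
    """Minimize mean squared error of y ~ w * x with plain SGD."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

# Stage-1 data: plentiful synthetic samples drawn from y = 2.0 * x.
synthetic = [(x, 2.0 * x) for x in range(1, 51)]
# Stage-2 data: a few "real" samples from a slightly shifted law.
target = [(x, 2.1 * x) for x in range(1, 6)]

w_pre = sgd(0.0, synthetic, lr=1e-4, epochs=3)   # stage 1: pretrain
w_fin = sgd(w_pre, target, lr=1e-3, epochs=20)   # stage 2: finetune

print(round(w_pre, 2), round(w_fin, 2))
```

Pretraining lands the weight near the synthetic optimum, and the short finetuning stage closes most of the remaining gap to the target distribution, which is the intuition behind the framework's generalization claim.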
Yujian Zhao
School of Artificial Intelligence, Beihang University, Beijing, China
Chengru Wu
Shen Yuan Honors College, Beihang University, Beijing, China
Yinong Xu
Shen Yuan Honors College, Beihang University, Beijing, China
Xuanzheng Du
Shen Yuan Honors College, Beihang University, Beijing, China
Ruiyu Li
SmartMore
Computer Vision, Deep Learning
Guanglin Niu
Assistant Professor, Beihang University
artificial intelligence, natural language processing, knowledge graph, deep learning, knowledge reasoning