🤖 AI Summary
To address the overfitting and degraded generalization caused by fully fine-tuning CLIP for text-based person retrieval (TPR), this paper proposes a unified parameter-efficient transfer learning (PETL) framework. The method introduces a three-module collaborative architecture comprising Prefix, LoRA, and Adapter, unified for TPR. It further designs S-Prefix to enhance gradient propagation through prefix tokens and L-Adapter to mitigate inter-module interference, enabling joint optimization of local prompts and global representations. Training only 4.7% of the parameters, the approach achieves state-of-the-art performance on CUHK-PEDES, ICFG-PEDES, and RSTPReid, significantly outperforming both full fine-tuning and existing PETL methods. This work is the first systematic integration and enhancement of the three dominant PETL techniques (Prefix tuning, LoRA, and Adapter) in the TPR domain, balancing knowledge-transfer efficiency and task-specific adaptation.
📝 Abstract
Text-based Person Retrieval (TPR) is a multi-modal task that aims to retrieve the target person from a pool of candidate images given a text description; it has recently garnered considerable attention due to progress in contrastive vision-language pre-trained models. Prior works leverage pre-trained CLIP to extract visual and textual person features and fully fine-tune the entire network, which has shown notable performance improvements over uni-modal pre-training models. However, fully fine-tuning a large model is prone to overfitting and hinders generalization. In this paper, we propose a novel Unified Parameter-Efficient Transfer Learning (PETL) method for Text-based Person Retrieval (UP-Person) to thoroughly transfer the multi-modal knowledge from CLIP. Specifically, UP-Person simultaneously integrates three lightweight PETL components: Prefix, LoRA, and Adapter. Prefix and LoRA are devised together to mine local information with task-specific prompts, while Adapter is designed to adjust global feature representations. Additionally, two vanilla submodules are optimized to suit the unified TPR architecture. First, S-Prefix is proposed to boost the attention on prefix tokens and enhance their gradient propagation, improving the flexibility and performance of the vanilla prefix. Second, L-Adapter is designed in parallel with layer normalization to adjust the overall feature distribution, resolving conflicts caused by overlap and interaction among the multiple submodules. Extensive experiments demonstrate that UP-Person achieves state-of-the-art results on various person retrieval datasets, including CUHK-PEDES, ICFG-PEDES, and RSTPReid, while fine-tuning merely 4.7% of the parameters. Code is available at https://github.com/Liu-Yating/UP-Person.
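To make the component names concrete, the following NumPy toy sketches the generic PETL building blocks the abstract refers to: a LoRA-augmented linear layer, a bottleneck Adapter, and an adapter branch placed in parallel with layer normalization (our reading of the L-Adapter idea). All shapes, initializations, and placements here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Minimal NumPy sketches of the PETL components named in the abstract.
# Shapes, inits, and the parallel-to-LayerNorm placement are assumptions
# made for illustration, not UP-Person's actual code.

def lora_linear(x, W, A, B, alpha=8):
    """Frozen linear layer W plus trainable low-rank update B @ A (rank r),
    with the update scaled by alpha / r as in standard LoRA."""
    r = A.shape[0]
    return x @ W.T + (x @ A.T) @ B.T * (alpha / r)

def adapter(x, W_down, W_up):
    """Bottleneck adapter: down-project, ReLU, up-project, residual add."""
    return x + np.maximum(x @ W_down, 0.0) @ W_up

def l_adapter_block(x, gamma, beta, W_down, W_up, eps=1e-5):
    """LayerNorm with an adapter branch running in parallel, so the branch
    can adjust the overall distribution (an assumed L-Adapter layout)."""
    mu = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    ln = gamma * (x - mu) / np.sqrt(var + eps) + beta
    return ln + np.maximum(x @ W_down, 0.0) @ W_up

rng = np.random.default_rng(0)
d, r, bottleneck = 16, 4, 8                   # toy dimensions
x = rng.standard_normal((2, d))               # a batch of 2 token features
W = rng.standard_normal((d, d))               # stand-in for a frozen CLIP weight
A = rng.standard_normal((r, d))               # LoRA A: random-initialized
B = np.zeros((d, r))                          # LoRA B: zero-initialized, so the
                                              # low-rank update starts as a no-op
W_down = rng.standard_normal((d, bottleneck)) * 0.01
W_up = np.zeros((bottleneck, d))              # zero-init up-projection: the
                                              # adapter starts as the identity
```

With these standard initializations the tuned modules leave the frozen backbone's outputs unchanged at step zero, which is the usual way PETL methods preserve pre-trained knowledge before any task-specific training.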