🤖 AI Summary
Existing person retrieval methods are limited to unimodal queries (image-only or text-only), failing to meet diverse real-world demands. This paper introduces Zero-shot Composed Person Retrieval (ZS-CPR), a novel cross-modal task that jointly leverages visual and textual cues to retrieve target individuals without requiring manually annotated image–text query pairs. To address it, the authors propose Word4Per, a two-stage framework: (1) a lightweight Textual Inversion Network (TINet) maps a reference image into a semantically aligned pseudo-word embedding; (2) a fine-tuned CLIP model performs efficient cross-modal matching. Furthermore, they construct ITCPR, the first fine-grained, human-annotated benchmark for composed person retrieval. Extensive experiments demonstrate substantial improvements over comparative approaches, with Rank-1 and mAP gains exceeding 10%. The code and dataset will be publicly released.
📝 Abstract
Searching for a specific person has great social benefit and security value, and it often involves a combination of visual and textual information. Conventional person retrieval methods, whether image-based or text-based, usually fall short of effectively harnessing both types of information, leading to a loss of accuracy. In this paper, a new task called Composed Person Retrieval (CPR) is proposed to jointly utilize image and text information for target person retrieval. However, supervised CPR requires a costly manually annotated dataset, and no such resources are currently available. To mitigate this issue, we first introduce Zero-shot Composed Person Retrieval (ZS-CPR), which leverages existing domain-related data to resolve the CPR problem without expensive annotations. Second, to learn a ZS-CPR model, we propose a two-stage learning framework, Word4Per, in which a lightweight Textual Inversion Network (TINet) and a text-based person retrieval model built on a fine-tuned Contrastive Language-Image Pre-training (CLIP) network are learned without using any CPR data. Third, a finely annotated Image-Text Composed Person Retrieval (ITCPR) dataset is built as the benchmark to assess the performance of the proposed Word4Per framework. Extensive experiments in terms of both Rank-1 and mAP demonstrate the effectiveness of Word4Per for the ZS-CPR task, surpassing comparative methods by over 10%. The code and ITCPR dataset will be publicly available at https://github.com/Delong-liu-bupt/Word4Per.
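The composed-query idea above can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the real TINet maps CLIP image features into the text encoder's token space so a pseudo-word "S*" can be inserted into a caption; here, random projections stand in for the CLIP encoders, TINet is reduced to a single linear layer, and composition is a simple additive fusion. All dimensions and names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 512  # CLIP-like joint embedding size (assumed)

def l2norm(x):
    """L2-normalize along the last axis."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

class ToyTINet:
    """Stand-in for the lightweight Textual Inversion Network:
    maps an image feature to a pseudo-word embedding (here a
    single random linear layer; the real TINet is learned)."""
    def __init__(self, dim=DIM):
        self.W = rng.standard_normal((dim, dim)) / np.sqrt(dim)

    def __call__(self, img_feat):
        return img_feat @ self.W

def compose_query(pseudo_token, text_feat):
    # Stand-in for encoding "a photo of [S*] person, <modifier text>";
    # here the pseudo-word and text features are fused additively.
    return l2norm(pseudo_token + text_feat)

def retrieve(query, gallery):
    # Rank gallery persons by cosine similarity to the composed query.
    sims = l2norm(gallery) @ query
    return np.argsort(-sims)

# Toy example: 5 gallery persons; query = reference image + text cue.
gallery = rng.standard_normal((5, DIM))
ref_img = rng.standard_normal(DIM)    # would be a CLIP image feature
text_cue = rng.standard_normal(DIM)   # would be a CLIP text feature

tinet = ToyTINet()
query = compose_query(tinet(l2norm(ref_img)), l2norm(text_cue))
ranking = retrieve(query, gallery)
print(ranking[0])  # index of the top-1 retrieved gallery person
```

The zero-shot property comes from the fact that both stages (TINet and the fine-tuned CLIP retrieval model) are trained on existing image-text person data, so no composed (image + text → target) triplets ever need to be annotated.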