Hierarchical Prompt Learning for Image- and Text-Based Person Re-Identification

📅 2025-11-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing image-to-image (I2I) and text-to-image (T2I) person re-identification (re-ID) approaches are typically modeled separately, leading to entangled cross-modal representations and suboptimal performance. To address this, we propose the first unified multimodal re-ID framework. Our core innovation is a task-aware hierarchical prompt learning mechanism: (i) a task-routing Transformer dynamically routes image or text queries through dedicated paths; (ii) identity-level learnable prompts and instance-level pseudo-text prompts enable fine-grained semantic injection into a shared visual encoder; and (iii) cross-modal prompt regularization enforces semantic alignment and representation disentanglement in the prompt space. By eliminating modality-specific parameter redundancy, our method achieves state-of-the-art performance on CUHK-PEDES and RSTPReid benchmarks, with significant gains in both I2I and T2I retrieval accuracy and cross-modal generalization capability.

📝 Abstract
Person re-identification (ReID) aims to retrieve target pedestrian images given either visual queries (image-to-image, I2I) or textual descriptions (text-to-image, T2I). Although both tasks share a common retrieval objective, they pose distinct challenges: I2I emphasizes discriminative identity learning, while T2I requires accurate cross-modal semantic alignment. Existing methods often treat these tasks separately, which may lead to representation entanglement and suboptimal performance. To address this, we propose a unified framework named Hierarchical Prompt Learning (HPL), which leverages task-aware prompt modeling to jointly optimize both tasks. Specifically, we first introduce a Task-Routed Transformer, which incorporates dual classification tokens into a shared visual encoder to route features to the I2I and T2I branches, respectively. On top of this, we develop a hierarchical prompt generation scheme that integrates identity-level learnable tokens with instance-level pseudo-text tokens. These pseudo-tokens are derived from image or text features via modality-specific inversion networks, injecting fine-grained, instance-specific semantics into the prompts. Furthermore, we propose a Cross-Modal Prompt Regularization strategy to enforce semantic alignment in the prompt token space, ensuring that pseudo-prompts preserve source-modality characteristics while enhancing cross-modal transferability. Extensive experiments on multiple ReID benchmarks validate the effectiveness of our method, achieving state-of-the-art performance on both I2I and T2I tasks.
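As a rough illustration of the dual classification tokens described in the abstract, the routing idea can be sketched as below. All names, shapes, and the stand-in "encoder" are my assumptions for illustration, not the paper's actual ViT implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def route_with_dual_cls(patch_tokens, cls_i2i, cls_t2i):
    """Prepend two learnable classification tokens to the patch sequence.

    After the shared encoder, the first token feeds the I2I branch and the
    second feeds the T2I branch (hypothetical sketch of the Task-Routed
    Transformer idea; the real encoder is a Transformer, not this toy step).
    """
    seq = np.concatenate([cls_i2i[None], cls_t2i[None], patch_tokens], axis=0)
    # Stand-in for the shared encoder: a single residual mean-mixing step,
    # used only to show how the two tokens flow through one backbone.
    mixed = seq + seq.mean(axis=0, keepdims=True)
    return mixed[0], mixed[1]  # (I2I feature, T2I feature)

patches = rng.normal(size=(196, 768))   # e.g. 14x14 patches, embed dim 768
cls_i2i = rng.normal(size=768)          # learnable token for the I2I branch
cls_t2i = rng.normal(size=768)          # learnable token for the T2I branch
f_i2i, f_t2i = route_with_dual_cls(patches, cls_i2i, cls_t2i)
print(f_i2i.shape, f_t2i.shape)         # both (768,)
```

The point of the sketch is that both branches share every backbone parameter; only the two prepended tokens specialize the output features per task.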
Problem

Research questions and friction points this paper is trying to address.

Unifying image- and text-based person re-identification in one framework
Addressing representation entanglement between visual and textual tasks
Enhancing cross-modal semantic alignment while preserving modality characteristics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Task-Routed Transformer routes features for I2I and T2I
Hierarchical prompt generation integrates identity and instance tokens
Cross-Modal Prompt Regularization enforces semantic alignment in prompts
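A toy version of the prompt-space regularizer named above might combine a source-preservation term with a cross-modal alignment term. The hinge form, the margin value, and all function names here are illustrative assumptions, not the paper's actual loss:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two 1-D vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def prompt_regularization(pseudo_prompt, same_modality_feat,
                          cross_modality_feat, margin=0.2):
    """Hypothetical sketch: keep the pseudo-prompt close to its
    source-modality feature (preservation) while pushing its similarity
    to the other modality above `margin` (alignment, hinge form)."""
    keep = 1.0 - cosine(pseudo_prompt, same_modality_feat)
    align = max(0.0, margin - cosine(pseudo_prompt, cross_modality_feat))
    return keep + align

rng = np.random.default_rng(1)
p = rng.normal(size=512)                         # pseudo-text prompt token
src = p + 0.1 * rng.normal(size=512)             # its source image feature
txt = rng.normal(size=512)                       # paired text feature
loss = prompt_regularization(p, src, txt)
print(loss >= 0.0)                               # True: both terms are non-negative
```

Both terms are bounded below by zero, so the regularizer only vanishes when the pseudo-prompt simultaneously preserves its source modality and clears the cross-modal margin.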
Linhan Zhou
Faculty of Information Engineering and Automation, Kunming University of Science and Technology
Shuang Li
School of Computer Science and Technology, Chongqing University of Posts and Telecommunications
Neng Dong
Nanjing University of Science and Technology
Yonghang Tai
Deakin University
Yafei Zhang
Faculty of Information Engineering and Automation, Kunming University of Science and Technology
Huafeng Li
KUST
Computer Vision · Pattern Recognition · Machine Learning