Attribute Guidance With Inherent Pseudo-label For Occluded Person Re-identification

📅 2025-08-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In occluded person re-identification (Re-ID), pre-trained vision-language models often neglect fine-grained attributes, hindering discrimination between occluded pedestrians and visually similar individuals. To address this, we propose a fully unsupervised pseudo-label self-generation framework. Our method exploits the pre-trained model's inherent fine-grained attribute comprehension—previously untapped in Re-ID—via a two-stage strategy that automatically generates attribute pseudo-labels without manual annotation. We further introduce a dual-guidance mechanism that jointly models global semantic features and local attribute representations, enhanced by contrastive learning to achieve fine-grained feature discrimination. Evaluated on multiple mainstream Re-ID benchmarks, our approach achieves state-of-the-art performance, significantly improving matching accuracy under occlusion while remaining competitive in standard (non-occluded) scenarios.

📝 Abstract
Person re-identification (Re-ID) aims to match person images across different camera views, with occluded Re-ID addressing scenarios where pedestrians are partially visible. While pre-trained vision-language models have shown effectiveness in Re-ID tasks, they face significant challenges in occluded scenarios by focusing on holistic image semantics while neglecting fine-grained attribute information. This limitation becomes particularly evident when dealing with partially occluded pedestrians or when distinguishing between individuals with subtle appearance differences. To address this limitation, we propose Attribute-Guide ReID (AG-ReID), a novel framework that leverages pre-trained models' inherent capabilities to extract fine-grained semantic attributes without additional data or annotations. Our framework operates through a two-stage process: first generating attribute pseudo-labels that capture subtle visual characteristics, then introducing a dual-guidance mechanism that combines holistic and fine-grained attribute information to enhance image feature extraction. Extensive experiments demonstrate that AG-ReID achieves state-of-the-art results on multiple widely-used Re-ID datasets, showing significant improvements in handling occlusions and subtle attribute differences while maintaining competitive performance on standard Re-ID scenarios.
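The first stage described above—generating attribute pseudo-labels from the pre-trained model's own representations—can be sketched as CLIP-style zero-shot attribute labeling: for each attribute group (e.g. upper-body color), the option whose text prompt best matches the image feature becomes the pseudo-label. This is a minimal illustration under assumed inputs (pre-computed, CLIP-like image and prompt embeddings); the function name and grouping scheme are hypothetical, not the paper's actual implementation.

```python
import numpy as np

def generate_attribute_pseudo_labels(img_feats, prompt_feats_by_group):
    """Zero-shot attribute pseudo-labeling (illustrative sketch).

    img_feats: (num_images, dim) image embeddings from a pre-trained
        vision-language model.
    prompt_feats_by_group: dict mapping an attribute group name (e.g.
        "upper_color") to a (num_options, dim) array of text-prompt
        embeddings, one row per candidate attribute value.
    Returns a dict mapping each group to an array of pseudo-label indices.
    """
    norm = lambda x: x / np.linalg.norm(x, axis=-1, keepdims=True)
    img = norm(img_feats)
    labels = {}
    for group, prompts in prompt_feats_by_group.items():
        sims = img @ norm(prompts).T          # (num_images, num_options)
        labels[group] = sims.argmax(axis=1)   # pseudo-label = best-matching option
    return labels
```

No manual annotation enters this step: the labels come entirely from similarities the pre-trained model already encodes, which is what makes the framework fully unsupervised.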
Problem

Research questions and friction points this paper is trying to address.

Handling occluded person Re-ID by leveraging fine-grained attributes
Overcoming neglect of subtle features in vision-language Re-ID models
Improving Re-ID accuracy without extra data or annotations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages pre-trained models for attribute extraction
Generates attribute pseudo-labels without extra data
Dual-guidance combines holistic and fine-grained features
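The dual-guidance idea above—training image features against both holistic semantics and attribute pseudo-labels with a contrastive objective—can be sketched as two InfoNCE-style terms sharing one image encoder output. This is a minimal numpy sketch under stated assumptions: the weighting `alpha`, the temperature, and the function names are illustrative choices, not the paper's exact loss.

```python
import numpy as np

def info_nce(sim, temperature=0.07):
    """InfoNCE-style loss: each row's diagonal entry is the positive pair."""
    logits = sim / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    exp = np.exp(logits)
    probs = exp / exp.sum(axis=1, keepdims=True)
    return -np.log(np.diag(probs)).mean()

def dual_guidance_loss(img_feats, global_feats, attr_feats, alpha=0.5):
    """Combine a holistic (global) and an attribute-level contrastive term.

    img_feats:    (n, d) image embeddings.
    global_feats: (n, d) holistic semantic targets, row-aligned with images.
    attr_feats:   (n, d) attribute pseudo-label targets, row-aligned.
    """
    norm = lambda x: x / np.linalg.norm(x, axis=1, keepdims=True)
    f, g, a = norm(img_feats), norm(global_feats), norm(attr_feats)
    loss_global = info_nce(f @ g.T)  # image vs. holistic semantics
    loss_attr = info_nce(f @ a.T)    # image vs. fine-grained attributes
    return alpha * loss_global + (1 - alpha) * loss_attr
```

The attribute term is what pushes apart visually similar identities that the holistic term alone would conflate, which is the mechanism credited for the occlusion gains.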
Rui Zhi
Beijing University of Posts and Telecommunications
Zhen Yang
Beijing University of Posts and Telecommunications
Haiyang Zhang
Nanjing University of Posts and Telecommunications
Wireless communication and signal processing