🤖 AI Summary
To address the limited robustness of single-modality person re-identification (ReID) under occlusion, illumination variation, and pose changes, this paper proposes the first cross-modal ReID framework to integrate fine-grained semantic segmentation with vision-language multimodal collaborative modeling. The method makes three contributions:

1. An end-to-end trainable cross-modal alignment module that leverages Transformers to achieve precise image–text feature mapping.
2. A multi-scale segmentation-guided attention mechanism that explicitly constrains visual feature learning via segmentation masks.
3. A contrastive-learning-driven embedding-space optimization strategy that enhances cross-modal semantic consistency.

Evaluated on benchmarks including Market-1501, the method significantly outperforms state-of-the-art approaches in both Top-1 accuracy and mAP. Notably, segmentation IoU improves by 12.3%, and ReID performance under occlusion increases by over 8.5%.
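To make contribution (3) concrete, here is a minimal sketch of a symmetric image–text contrastive objective in the CLIP/InfoNCE style. The paper's exact loss is not given in this summary, so the temperature value, the symmetric two-way form, and all function names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    # Project embeddings onto the unit sphere so dot products are cosine similarities.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def symmetric_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired (image, text) embeddings.

    Row i of img_emb and row i of txt_emb are assumed to describe the same
    person; all other rows in the batch serve as negatives. The temperature
    of 0.07 is a common default, not a value from the paper.
    """
    img = l2_normalize(img_emb)
    txt = l2_normalize(txt_emb)
    logits = img @ txt.T / temperature  # (B, B): matching pairs on the diagonal

    def cross_entropy(lg):
        lg = lg - lg.max(axis=1, keepdims=True)          # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -np.diag(logp).mean()                     # target = own index

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

Minimizing this loss pulls each image embedding toward its paired text embedding and pushes it away from the other captions in the batch, which is one standard way to enforce the cross-modal semantic consistency the summary describes.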
📝 Abstract
Person re-identification (ReID) plays a critical role in applications such as security surveillance and criminal investigation by matching individuals across large image galleries captured by non-overlapping cameras. Traditional ReID methods rely on unimodal inputs, typically images, and are limited by challenges such as occlusion, lighting changes, and pose variation. While image-based and text-based ReID systems have each advanced, the integration of the two modalities has remained under-explored. This paper presents FusionSegReID, a multimodal model that combines image and text inputs for enhanced ReID performance. By leveraging the complementary strengths of these modalities, our model improves matching accuracy and robustness, particularly in complex, real-world scenarios where a single modality may struggle. Our experiments show significant improvements in Top-1 accuracy and mean Average Precision (mAP), as well as better segmentation results in challenging settings such as occlusion and low-quality images. Ablation studies further confirm that the multimodal fusion and segmentation modules each contribute to re-identification and mask accuracy. Overall, FusionSegReID outperforms traditional unimodal models, offering a more robust and flexible solution for real-world person ReID tasks.
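The abstract's claim that segmentation helps under occlusion can be illustrated with a minimal sketch of mask-guided attention: key positions outside the predicted person mask are suppressed before the softmax, so occluders and background contribute nothing to the pooled feature. This is an illustrative single-head, single-scale stand-in for the paper's multi-scale segmentation-guided attention; the function name and masking scheme are assumptions, not the authors' architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def mask_guided_attention(query, keys, values, mask, neg_inf=-1e9):
    """Scaled dot-product attention gated by a binary person mask.

    query: (d,) pooled query vector; keys, values: (N, d) spatial tokens;
    mask: (N,) with 1 inside the person region, 0 elsewhere. Tokens with
    mask 0 receive a large negative score, so their post-softmax weight
    is effectively zero.
    """
    scores = keys @ query / np.sqrt(query.shape[-1])   # (N,) similarities
    scores = np.where(mask > 0.5, scores, neg_inf)     # drop masked-out tokens
    weights = softmax(scores)
    return weights @ values, weights                   # pooled feature, weights
```

Because the mask zeros out occluded or background tokens before aggregation, the pooled descriptor depends only on visible person pixels, which is one plausible mechanism behind the reported robustness gains under occlusion.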