X-ReID: Multi-granularity Information Interaction for Video-Based Visible-Infrared Person Re-Identification

📅 2025-11-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the substantial modality gap and insufficient spatiotemporal modeling in video-based visible-infrared person re-identification (VVI-ReID), this paper proposes X-ReID, a cross-modal feature learning framework. Methodologically, X-ReID employs a vision-language model as its backbone to jointly encode cross-modal semantics and video temporal structure. Its key contributions are: (1) a cross-modal prototype collaboration mechanism that enables fine-grained modality alignment via shared semantic prototypes; and (2) a multi-granularity spatiotemporal interaction module that jointly models short-term adjacent-frame dynamics and long-range cross-frame dependencies to enhance discriminative dynamic representation. Extensive experiments demonstrate state-of-the-art performance on the HITSZ-VCM and BUPTCampus benchmarks, achieving significant gains in cross-modal matching accuracy. The source code is publicly available.

📝 Abstract
Large-scale vision-language models (e.g., CLIP) have recently achieved remarkable performance in retrieval tasks, yet their potential for Video-based Visible-Infrared Person Re-Identification (VVI-ReID) remains largely unexplored. The primary challenges are narrowing the modality gap and leveraging spatiotemporal information in video sequences. To address the above issues, in this paper, we propose a novel cross-modality feature learning framework named X-ReID for VVI-ReID. Specifically, we first propose a Cross-modality Prototype Collaboration (CPC) to align and integrate features from different modalities, guiding the network to reduce the modality discrepancy. Then, a Multi-granularity Information Interaction (MII) is designed, incorporating short-term interactions from adjacent frames, long-term cross-frame information fusion, and cross-modality feature alignment to enhance temporal modeling and further reduce modality gaps. Finally, by integrating multi-granularity information, a robust sequence-level representation is achieved. Extensive experiments on two large-scale VVI-ReID benchmarks (i.e., HITSZ-VCM and BUPTCampus) demonstrate the superiority of our method over state-of-the-art methods. The source code is released at https://github.com/AsuradaYuci/X-ReID.
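The Cross-modality Prototype Collaboration (CPC) described above can be pictured as pulling visible and infrared features of the same identity toward a shared semantic anchor. The following is a minimal pure-Python sketch of that idea only; the names (`shared_prototype`, `alignment_loss`) and the mean-based prototype are illustrative assumptions, not the paper's actual formulation, which operates on CLIP features inside a learned framework.

```python
# Hedged sketch of prototype-based cross-modality alignment (assumed
# simplification): the shared prototype is the mean of all features from
# both modalities, and the alignment loss is the mean squared distance
# of each feature to that prototype.

def mean_vec(vectors):
    """Element-wise mean of a list of equal-length vectors."""
    n, dim = len(vectors), len(vectors[0])
    return [sum(v[i] for v in vectors) / n for i in range(dim)]

def shared_prototype(visible_feats, infrared_feats):
    """Shared cross-modality prototype: mean over both modalities."""
    return mean_vec(visible_feats + infrared_feats)

def alignment_loss(feats, prototype):
    """Mean squared distance from each feature to the prototype."""
    total = 0.0
    for f in feats:
        total += sum((a - b) ** 2 for a, b in zip(f, prototype))
    return total / len(feats)

# Toy identity: two visible and two infrared frame-level features.
vis = [[1.0, 0.0], [0.8, 0.2]]
ir = [[0.2, 0.8], [0.0, 1.0]]
proto = shared_prototype(vis, ir)        # -> [0.5, 0.5]
loss = alignment_loss(vis + ir, proto)   # -> 0.34
```

Minimizing such a loss drives both modalities' features toward the same region of the embedding space, which is the intuition behind reducing the modality discrepancy via shared prototypes.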
Problem

Research questions and friction points this paper is trying to address.

Reducing modality gap between visible and infrared video sequences
Leveraging spatiotemporal information from multi-frame video data
Enhancing cross-modality feature alignment for person re-identification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-modality Prototype Collaboration (CPC) aligns visible and infrared features via shared prototypes
Multi-granularity Information Interaction (MII) enhances spatiotemporal modeling
Integrates short-term adjacent-frame dynamics with long-term cross-frame fusion
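The multi-granularity idea in the list above can be sketched structurally: one path interacts adjacent frames (short-term), another fuses information across the whole sequence (long-term), and both are merged into a sequence-level representation. This pure-Python sketch uses simple averaging in place of the paper's learned attention modules; every function name here is a hypothetical placeholder.

```python
# Hedged sketch of multi-granularity temporal fusion (assumed
# simplification): short-term = average of adjacent frame pairs,
# long-term = mean over all frames, sequence representation = mean of
# the two granularities. The actual MII module is learned, not fixed.

def short_term(frames):
    """Adjacent-frame interaction: average each consecutive pair."""
    return [
        [(a + b) / 2 for a, b in zip(frames[i], frames[i + 1])]
        for i in range(len(frames) - 1)
    ]

def long_term(frames):
    """Long-range fusion: mean over all frames in the sequence."""
    n, d = len(frames), len(frames[0])
    return [sum(f[i] for f in frames) / n for i in range(d)]

def sequence_repr(frames):
    """Fuse short- and long-term cues into one sequence-level vector."""
    st = long_term(short_term(frames))  # pool the short-term features
    lt = long_term(frames)
    return [(s + l) / 2 for s, l in zip(st, lt)]

# Toy sequence of three 2-D frame features.
seq = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
rep = sequence_repr(seq)
```

The point of the two paths is that adjacent-frame interaction captures local motion cues while cross-frame fusion captures identity-level appearance consistency; combining them yields a more robust sequence-level descriptor than either alone.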
Chenyang Yu
Dalian University of Technology
Deep learning, person re-identification
Xuehu Liu
Wuhan University of Technology; Dalian University of Technology
Pingping Zhang
School of Future Technology, Dalian University of Technology, Dalian, China
Huchuan Lu
School of Information and Communication Engineering, Dalian University of Technology, Dalian, China