🤖 AI Summary
To address the substantial modality gap and insufficient spatiotemporal modeling in video-based visible-infrared person re-identification (VVI-ReID), this paper proposes X-ReID, a cross-modal feature learning framework. Methodologically, X-ReID employs a vision-language model as its backbone to jointly encode cross-modal semantics and video temporal structure. Its key contributions are: (1) a cross-modal prototype collaboration mechanism that enables fine-grained modality alignment via shared semantic prototypes; and (2) a multi-granularity spatiotemporal interaction module that jointly models short-term adjacent-frame dynamics and long-range cross-frame dependencies to enhance discriminative dynamic representations. Extensive experiments demonstrate state-of-the-art performance on the HITSZ-VCM and BUPTCampus benchmarks, with significant gains in cross-modal matching accuracy. The source code is publicly available.
📝 Abstract
Large-scale vision-language models (e.g., CLIP) have recently achieved remarkable performance in retrieval tasks, yet their potential for Video-based Visible-Infrared Person Re-Identification (VVI-ReID) remains largely unexplored. The primary challenges are narrowing the modality gap and leveraging spatiotemporal information in video sequences. To address these issues, in this paper we propose a novel cross-modality feature learning framework named X-ReID for VVI-ReID. Specifically, we first propose a Cross-modality Prototype Collaboration (CPC) module to align and integrate features from different modalities, guiding the network to reduce the modality discrepancy. Then, a Multi-granularity Information Interaction (MII) module is designed, incorporating short-term interactions between adjacent frames, long-term cross-frame information fusion, and cross-modality feature alignment to enhance temporal modeling and further reduce the modality gap. Finally, by integrating multi-granularity information, a robust sequence-level representation is achieved. Extensive experiments on two large-scale VVI-ReID benchmarks (i.e., HITSZ-VCM and BUPTCampus) demonstrate the superiority of our method over state-of-the-art methods. The source code is released at https://github.com/AsuradaYuci/X-ReID.
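To make the shared-prototype idea behind CPC concrete, the sketch below shows one common way such alignment can be formulated; this is a hypothetical illustration, not the authors' implementation, and the loss form, the temperature `tau`, and the per-identity prototype table are all assumptions. Visible and infrared features are both classified against the same prototype table, so features of the same person in either modality are pulled toward a single shared anchor, shrinking the modality gap:

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    """Normalize vectors to unit length so dot products become cosine similarities."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def prototype_alignment_loss(vis_feats, ir_feats, labels, prototypes, tau=0.1):
    """Hypothetical prototype-contrastive loss (assumed form, not the paper's).

    vis_feats, ir_feats: (N, D) sequence-level features per modality.
    labels:              (N,)   identity indices into the prototype table.
    prototypes:          (C, D) shared semantic prototypes, one per identity.
    Both modalities are scored against the SAME prototypes, so minimizing the
    loss pulls matching visible/infrared features toward a common anchor.
    """
    protos = l2_normalize(prototypes)
    loss = 0.0
    for feats in (vis_feats, ir_feats):
        f = l2_normalize(feats)
        logits = f @ protos.T / tau                      # (N, C) scaled cosine sims
        logits -= logits.max(axis=1, keepdims=True)      # numerical stability
        log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        loss += -log_prob[np.arange(len(labels)), labels].mean()
    return loss / 2.0

# Toy check with orthogonal prototypes: features sitting on their own identity's
# prototype give a much lower loss than features sitting on the wrong prototype.
prototypes = np.eye(4, 16)                # 4 identities, 16-dim feature space
labels = np.array([0, 1, 2, 3])
aligned = prototypes[labels]              # each feature on its own prototype
misaligned = prototypes[(labels + 1) % 4] # each feature on the wrong prototype
print(prototype_alignment_loss(aligned, aligned, labels, prototypes) <
      prototype_alignment_loss(misaligned, misaligned, labels, prototypes))  # True
```

The key design choice this illustrates is that the prototypes act as a modality-agnostic meeting point: neither modality is aligned directly to the other, so neither dominates the shared representation.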