🤖 AI Summary
To address the significant modality gap and the underutilization of intermediate representations in visible-infrared person re-identification (VI-ReID), this paper proposes a modality-transition representation learning framework. The method leverages pre-trained visible-to-infrared image translation models—not for end-to-end fine-tuning or parameter expansion—but to generate intermediate translated images that serve as modality-bridging proxies, guiding the backbone network to learn discriminative, cross-modally aligned features. Two novel losses are introduced: a modality-transition contrastive loss and a modality-query regularization loss, both of which explicitly enforce feature consistency between translated and real images. Extensive experiments on three standard benchmarks—RegDB, SYSU-MM01, and LLCM—demonstrate consistent and substantial improvements over state-of-the-art methods, validating the effectiveness, generalizability, and inference efficiency of the approach.
📝 Abstract
Visible-infrared person re-identification (VI-ReID) aims to associate pedestrian images across visible and infrared modalities in practical scenarios with background illumination changes. However, a substantial gap inherently exists between the two modalities. Moreover, existing methods primarily rely on intermediate representations to align cross-modal features of the same person. These intermediate representations are usually created either by generating intermediate images (a form of data augmentation) or by fusing intermediate features (which adds parameters and lacks interpretability), and neither makes good use of the intermediate features. Thus, we propose a novel VI-ReID framework via Modality-Transition Representation Learning (MTRL), which uses a generated intermediate image as a transmitter from the visible to the infrared modality; the generated images are fully aligned with the original visible images while resembling the infrared modality. We then train with a modality-transition contrastive loss and a modality-query regularization loss, which align the cross-modal features more effectively. Notably, the proposed framework requires no additional parameters, so it achieves the same inference speed as the backbone while improving its performance on the VI-ReID task. Extensive experimental results show that our model significantly and consistently outperforms existing state-of-the-art methods on three typical VI-ReID datasets.
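The abstract does not give the exact form of the modality-transition contrastive loss, but a common way to enforce feature consistency between translated and real images is an InfoNCE-style objective: each translated-image feature is pulled toward the real-image feature of the same identity and pushed away from other identities in the batch. The sketch below is an illustrative assumption, not the paper's actual formulation (the function name, temperature value, and batch pairing are all hypothetical):

```python
import numpy as np

def modality_transition_contrastive_loss(f_trans, f_real, temperature=0.1):
    """Hypothetical InfoNCE-style contrastive loss.

    f_trans: (N, D) features of translated (modality-transition) images.
    f_real:  (N, D) features of real images; row i of both matrices is
             assumed to depict the same person (the positive pair).
    """
    # L2-normalize so that dot products are cosine similarities.
    f_trans = f_trans / np.linalg.norm(f_trans, axis=1, keepdims=True)
    f_real = f_real / np.linalg.norm(f_real, axis=1, keepdims=True)

    # Pairwise similarity logits between translated and real features.
    logits = f_trans @ f_real.T / temperature          # shape (N, N)

    # Log-softmax over each row, with a max-shift for numerical stability;
    # the diagonal entries are the matching (positive) pairs.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

Under this sketch, perfectly aligned translated/real features of the same identity yield a lower loss than mismatched pairings, which is the alignment behavior the abstract describes; the loss is applied only during training, so inference cost is unchanged.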