STMI: Segmentation-Guided Token Modulation with Cross-Modal Hypergraph Interaction for Multi-Modal Object Re-Identification

📅 2026-02-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the loss of discriminative cues and the susceptibility to background interference in multi-modal object re-identification, problems that often arise from rigid token filtering or simplistic feature fusion. To overcome these limitations, the authors propose a framework that leverages segmentation masks generated by SAM to guide learnable attention modulation, adaptively redistributing semantic tokens without discarding any information. A cross-modal hypergraph neural network further models high-order semantic relationships among modalities. Extensive experiments show that the proposed method significantly outperforms existing approaches on the RGBNT201, RGBNT100, and MSVR310 benchmarks, with notable gains in both accuracy and robustness for multi-modal re-identification.
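The summary's key idea, modulating attention with a segmentation mask instead of hard-filtering tokens, can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, the scalar `alpha` (standing in for a learnable modulation strength), and all shapes are assumptions. The mask adds a bias to the attention logits so background tokens are down-weighted but never removed:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def mask_modulated_attention(q, k, v, fg_mask, alpha=2.0):
    """q, k, v: (num_tokens, dim); fg_mask: (num_tokens,) in [0, 1],
    e.g. derived from a SAM foreground mask pooled per token.
    alpha is a hypothetical stand-in for a learnable modulation strength."""
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d)               # (N, N) raw attention scores
    logits = logits + alpha * fg_mask[None, :]  # bias attention toward foreground keys
    attn = softmax(logits, axis=-1)             # rows still sum to 1: no token is dropped
    return attn @ v

rng = np.random.default_rng(0)
q = k = v = rng.normal(size=(6, 8))
fg_mask = np.array([1, 1, 1, 0, 0, 0], dtype=float)  # first 3 tokens = foreground
out = mask_modulated_attention(q, k, v, fg_mask)
```

Because the bias acts inside the softmax, background tokens keep a nonzero (but reduced) share of attention, which matches the summary's claim of redistributing rather than discarding information.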

📝 Abstract
Multi-modal object Re-Identification (ReID) aims to exploit complementary information from different modalities to retrieve specific objects. However, existing methods often rely on hard token filtering or simple fusion strategies, which can lead to the loss of discriminative cues and increased background interference. To address these challenges, we propose STMI, a novel multi-modal learning framework consisting of three key components: (1) a Segmentation-Guided Feature Modulation (SFM) module that leverages SAM-generated masks to enhance foreground representations and suppress background noise through learnable attention modulation; (2) a Semantic Token Reallocation (STR) module that employs learnable query tokens and an adaptive reallocation mechanism to extract compact, informative representations without discarding any tokens; and (3) a Cross-Modal Hypergraph Interaction (CHI) module that constructs a unified hypergraph across modalities to capture high-order semantic relationships. Extensive experiments on public benchmarks (RGBNT201, RGBNT100, and MSVR310) demonstrate the effectiveness and robustness of the proposed STMI framework in multi-modal ReID scenarios.
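The CHI module's "unified hypergraph across modalities" can be sketched as a single hypergraph convolution step. This is a hedged illustration, not the paper's method: the incidence matrix `H`, the part-based hyperedge layout, and the projection `Theta` (a stand-in for learnable weights) are all assumptions. A hyperedge links the same semantic part across RGB/NIR/TIR tokens, so one aggregation step mixes information across all modalities at once:

```python
import numpy as np

def hypergraph_conv(X, H, Theta):
    """One hypergraph convolution step with random-walk normalization.
    X: (N, d) token features from all modalities stacked together;
    H: (N, E) incidence matrix, H[i, e] = 1 if token i joins hyperedge e;
    Theta: (d, d_out) projection standing in for learnable weights."""
    Dv = np.clip(H.sum(axis=1), 1, None)          # token degrees
    De = np.clip(H.sum(axis=0), 1, None)          # hyperedge sizes
    edge_feats = (H.T @ X) / De[:, None]          # average member tokens per hyperedge
    token_feats = (H @ edge_feats) / Dv[:, None]  # scatter hyperedge features back to tokens
    return token_feats @ Theta

rng = np.random.default_rng(0)
# hypothetical setup: 2 part tokens per modality for RGB / NIR / TIR -> 6 tokens of dim 4
X = rng.normal(size=(6, 4))
# one hyperedge per semantic part, each linking that part across all three modalities
H = np.array([[1, 0],   # RGB, part A
              [0, 1],   # RGB, part B
              [1, 0],   # NIR, part A
              [0, 1],   # NIR, part B
              [1, 0],   # TIR, part A
              [0, 1]],  # TIR, part B
             dtype=float)
Theta = rng.normal(size=(4, 4))
Z = hypergraph_conv(X, H, Theta)
```

Unlike a pairwise graph, each hyperedge here connects three tokens at once (one per modality), which is what makes the relationships "high-order": after one step, all tokens sharing a hyperedge carry the same aggregated part feature.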
Problem

Research questions and friction points this paper is trying to address.

Multi-modal Re-Identification
Token Filtering
Background Interference
Feature Fusion
Discriminative Cues
Innovation

Methods, ideas, or system contributions that make the work stand out.

Segmentation-Guided Feature Modulation
Semantic Token Reallocation
Cross-Modal Hypergraph Interaction
Multi-Modal Re-Identification
Learnable Attention Modulation