Cross-Modal Synergies: Unveiling the Potential of Motion-Aware Fusion Networks in Handling Dynamic and Static ReID Scenarios

📅 2025-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the degradation of face/pedestrian re-identification (ReID) performance under frequent occlusions in surveillance scenarios, this paper proposes the Motion-Aware Fusion (MOTAR-FUSE) network, which for the first time implicitly models motion cues from single-frame static images to enhance feature discriminability. Our method introduces three key innovations: (1) a lightweight visual adapter with a dual-input architecture jointly encodes appearance and motion priors; (2) a motion-aware Transformer trained via a motion-consistency self-supervised task to learn dynamic human representations; and (3) unified modeling and cross-modal feature fusion across static images, occluded images, and video sequences. Evaluated on comprehensive ReID benchmarks—including full-scene, occlusion-specific, and video-based settings—MOTAR-FUSE achieves state-of-the-art performance. Notably, it delivers substantial gains in matching accuracy under severe occlusion, demonstrating robustness and generalizability across diverse surveillance conditions.

📝 Abstract
Navigating the complexities of person re-identification (ReID) in varied surveillance scenarios, particularly when occlusions occur, poses significant challenges. We introduce a Motion-Aware Fusion (MOTAR-FUSE) network that exploits motion cues derived from static imagery to enhance ReID capabilities. The network incorporates a dual-input visual adapter that processes both images and videos, enabling more effective feature extraction. A distinctive aspect of our approach is a motion consistency task that trains the motion-aware transformer to capture the dynamics of human motion, which substantially improves feature recognition in scenarios where occlusions are prevalent. Comprehensive evaluations across multiple ReID benchmarks, including holistic, occluded, and video-based settings, show that MOTAR-FUSE outperforms existing approaches.
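The motion consistency task described above can be illustrated with a minimal sketch. The paper does not publish its loss formulation here, so the function name and the choice of a cosine-alignment objective are assumptions: the idea is simply that features predicted from a single static frame should agree with features pooled over the corresponding short clip.

```python
import numpy as np

def motion_consistency_loss(static_feat, clip_feat, eps=1e-8):
    """Hypothetical motion-consistency objective: penalize misalignment
    (1 - cosine similarity) between a feature vector predicted from a
    static frame and one pooled over a short video clip."""
    s = static_feat / (np.linalg.norm(static_feat) + eps)
    c = clip_feat / (np.linalg.norm(clip_feat) + eps)
    return 1.0 - float(np.dot(s, c))

# Toy check: identical features incur zero loss,
# orthogonal features incur the maximum-misalignment loss of 1.
f = np.array([0.2, 0.5, 0.3])
g = np.array([0.5, -0.2, 0.0])
print(round(motion_consistency_loss(f, f), 6))  # → 0.0
print(round(motion_consistency_loss(f, g), 6))  # → 1.0
```

In a real training loop this term would be added to the ReID loss so the static-image branch learns motion-aware representations without needing video at inference time.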
Problem

Research questions and friction points this paper is trying to address.

Person Re-Identification
Occlusion Handling
Face Recognition
Innovation

Methods, ideas, or system contributions that make the work stand out.

MOTAR-FUSE
Perception and Action Fusion
Robust Face Re-identification
Authors
Fuxi Ling (Hangzhou Dianzi University)
Hongye Liu (Duke University)
Guoqiang Huang (Zhejiang Gongshang University)
Jing Li (Zhejiang Gongshang University)
Hong Wu (Hangzhou Dianzi University)
Zhihao Tang (Hangzhou Dianzi University)