MM-Gesture: Towards Precise Micro-Gesture Recognition through Multimodal Fusion

📅 2025-07-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Micro-gestures (MGs) pose significant recognition challenges due to their extremely short duration and subtle motion patterns. To address this, we propose a novel multimodal fusion framework integrating six complementary modalities: skeletal joint coordinates, limb motion trajectories, RGB video, Taylor-series-expanded video, optical flow, and depth video. We introduce a learnable modality-weighted ensemble strategy that combines predictions from two backbone architectures—PoseConv3D for skeleton-based spatiotemporal modeling and Video Swin Transformer for appearance-based spatiotemporal reasoning. Furthermore, the RGB branch is pretrained on the large-scale MA-52 dataset to enhance generalization. Evaluated on the iMiGUE benchmark, our method achieves a top-1 accuracy of 73.213%, substantially outperforming prior state-of-the-art approaches. This work secured first place in the Micro-gesture Classification Track of the 3rd MiGA Challenge at IJCAI 2025.

📝 Abstract
In this paper, we present MM-Gesture, the solution developed by our team HFUT-VUT, which ranked 1st in the micro-gesture classification track of the 3rd MiGA Challenge at IJCAI 2025, achieving superior performance compared to previous state-of-the-art methods. MM-Gesture is a multimodal fusion framework designed specifically for recognizing subtle, short-duration micro-gestures (MGs), integrating complementary cues from joint, limb, RGB video, Taylor-series video, optical-flow video, and depth video modalities. Building on PoseConv3D and Video Swin Transformer architectures with a novel modality-weighted ensemble strategy, our method further enhances RGB-modality performance through transfer learning, pre-training on the larger MA-52 dataset. Extensive experiments on the iMiGUE benchmark, including ablation studies across modalities, validate the effectiveness of our approach, which achieves a top-1 accuracy of 73.213%.
Problem

Research questions and friction points this paper is trying to address.

Recognizing subtle micro-gestures via multimodal fusion
Improving accuracy in short-duration gesture classification
Enhancing RGB modality with transfer learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal fusion for micro-gesture recognition
Modality-weighted ensemble strategy
Transfer learning with MA-52 dataset
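The modality-weighted ensemble above can be pictured as a late-fusion step: each modality branch produces class logits, and learned scalar weights blend them before the final prediction. The sketch below is a hypothetical, simplified illustration of that idea — the modality names follow the paper, but the softmax-normalized scalar weighting and all function names are assumptions, not the authors' implementation.

```python
import math

# Six modality branches, as listed in the paper.
MODALITIES = ["joint", "limb", "rgb", "taylor", "flow", "depth"]

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def fuse_logits(per_modality_logits, raw_weights):
    """Blend per-modality class logits with learnable scalar weights.

    per_modality_logits: dict modality -> list of class logits
    raw_weights: dict modality -> unnormalized scalar weight
    (in training these scalars would be optimized; here they are inputs)
    """
    w = softmax([raw_weights[m] for m in MODALITIES])
    num_classes = len(next(iter(per_modality_logits.values())))
    fused = [0.0] * num_classes
    for weight, m in zip(w, MODALITIES):
        for c, logit in enumerate(per_modality_logits[m]):
            fused[c] += weight * logit
    return fused

def predict(per_modality_logits, raw_weights):
    """Return the class index with the highest fused score."""
    fused = fuse_logits(per_modality_logits, raw_weights)
    return max(range(len(fused)), key=fused.__getitem__)
```

With uniform weights this reduces to plain score averaging; the point of making the weights learnable is that stronger modalities (e.g. the MA-52-pretrained RGB branch) can be trusted more than noisier ones.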