Multimodal Skeleton-Based Action Representation Learning via Decomposition and Composition

📅 2025-12-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the challenge of balancing performance and efficiency in multimodal skeleton-based action recognition, this paper proposes a self-supervised decomposition–reconstruction learning framework. The method introduces a novel bidirectional self-supervision mechanism—“Decomposition + Composition”: it decomposes fused features into modality-aligned ground-truth representations while reconstructing unimodal features to guide multimodal representation learning. This enables modeling of cross-modal complementarity and computational efficiency optimization without additional annotations. Built upon a shared backbone network, the framework jointly optimizes contrastive alignment loss and skeleton sequence modeling. Extensive experiments demonstrate state-of-the-art accuracy on NTU RGB+D 60/120 and PKU-MMD II, with a 37% inference speedup and 29% parameter reduction compared to late- and early-fusion baselines, significantly advancing both accuracy and efficiency.

Technology Category

Application Category

📝 Abstract
Multimodal human action understanding is a significant problem in computer vision, with the central challenge being the effective utilization of the complementarity among diverse modalities while maintaining model efficiency. However, most existing methods rely on simple late fusion to enhance performance, which results in substantial computational overhead. Although early fusion with a shared backbone for all modalities is efficient, it struggles to achieve excellent performance. To address the dilemma of balancing efficiency and effectiveness, we introduce a self-supervised multimodal skeleton-based action representation learning framework, named Decomposition and Composition. The Decomposition strategy meticulously decomposes the fused multimodal features into distinct unimodal features, subsequently aligning them with their respective ground truth unimodal counterparts. On the other hand, the Composition strategy integrates multiple unimodal features, leveraging them as self-supervised guidance to enhance the learning of multimodal representations. Extensive experiments on the NTU RGB+D 60, NTU RGB+D 120, and PKU-MMD II datasets demonstrate that the proposed method strikes an excellent balance between computational cost and model performance.
Problem

Research questions and friction points this paper is trying to address.

Balances efficiency and effectiveness in multimodal action learning
Decomposes fused features into distinct unimodal representations
Integrates unimodal features to enhance multimodal representation learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decomposes fused multimodal features into distinct unimodal features
Aligns unimodal features with their ground truth counterparts
Integrates unimodal features as self-supervised guidance for multimodal learning
🔎 Similar Papers
No similar papers found.
H
Hongsong Wang
School of Computer Science and Engineering, Southeast University, Nanjing 210096, China. Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, China.
H
Heng Fei
School of Cyber Science and Engineering, Southeast University, Nanjing 210096, China.
B
Bingxuan Dai
School of Cyber Science and Engineering, Southeast University, Nanjing 210096, China.
Jie Gui
Jie Gui
Southeast University, China
Pattern Recognition and Machine LearningArtificial IntelligenceData MiningDeep LearningImage Processing and Computer Vis