Multimodal Transformers are Hierarchical Modal-wise Heterogeneous Graphs

📅 2025-05-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In multimodal sentiment analysis (MSA), Multimodal Transformers (MulTs) suffer from computational redundancy and parameter explosion, hindering efficiency. This work establishes, for the first time, a theoretical equivalence between MulTs and Hierarchical Modal-wise Heterogeneous Graphs (HMHGs), thereby proposing a graph-structured representation paradigm. The authors design an Interlaced Mask (IM) mechanism to enable holistic, end-to-end multimodal fusion, and introduce weight-sharing Transformers coupled with a Triton-accelerated Decomposition kernel, achieving parameter compression at no additional computational cost. The resulting lightweight model, GsiT, retains only one-third of the original parameters (a 67% reduction) while significantly outperforming state-of-the-art MulTs on benchmark MSA datasets including CMU-MOSEI and IEMOCAP. GsiT thus achieves superior accuracy without compromising inference efficiency.

📝 Abstract
Multimodal Sentiment Analysis (MSA) is a rapidly developing field that integrates multimodal information to recognize sentiments, and existing models have made significant progress in this area. The central challenge in MSA is multimodal fusion, which is predominantly addressed by Multimodal Transformers (MulTs). Although they act as the paradigm, MulTs suffer from efficiency concerns. In this work, from the perspective of efficiency optimization, we propose and prove that MulTs are hierarchical modal-wise heterogeneous graphs (HMHGs), and we introduce the graph-structured representation pattern of MulTs. Based on this pattern, we propose an Interlaced Mask (IM) mechanism to design the Graph-Structured and Interlaced-Masked Multimodal Transformer (GsiT). It is formally equivalent to MulTs and, through IM, achieves an efficient weight-sharing mechanism without information disorder, enabling All-Modal-In-One fusion with only 1/3 of the parameters of pure MulTs. A Triton kernel called Decomposition is implemented to avoid additional computational overhead. Moreover, GsiT achieves significantly higher performance than traditional MulTs. To further validate the effectiveness of GsiT itself and the HMHG concept, we integrate them into multiple state-of-the-art models and demonstrate notable performance improvements and parameter reductions on widely used MSA datasets.
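The abstract does not spell out how the Interlaced Mask is constructed, but the core idea of restricting a single shared Transformer to specific cross-modal attention blocks can be sketched as below. This is a minimal illustration, not the paper's exact IM design: the function name, the pair-selection pattern, and the toy sequence lengths are all assumptions for demonstration.

```python
import numpy as np

def interlaced_mask(seq_lens, allow):
    """Build a boolean block attention mask over concatenated modality
    sequences. allow[(i, j)] == True lets queries of modality i attend
    to keys of modality j; every other position stays masked out."""
    offsets = np.cumsum([0] + list(seq_lens))  # block boundaries
    total = offsets[-1]
    mask = np.zeros((total, total), dtype=bool)
    for (i, j), ok in allow.items():
        if ok:
            mask[offsets[i]:offsets[i + 1], offsets[j]:offsets[j + 1]] = True
    return mask

# Three modalities (e.g. text, audio, vision) with toy lengths.
lens = [4, 3, 2]
# Hypothetical interlaced pattern: each modality attends only to the
# other two, never to itself, so one weight-shared attention pass
# performs all pairwise cross-modal fusion without mixing a modality
# back into itself ("information disorder").
allow = {(i, j): i != j for i in range(3) for j in range(3)}
m = interlaced_mask(lens, allow)
```

With a mask like this, a single set of attention weights can serve all modality pairs at once, which is one way a 3x parameter reduction relative to separate pairwise MulT streams could arise.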
Problem

Research questions and friction points this paper is trying to address.

Efficiency optimization in Multimodal Transformers for sentiment analysis
Proposing hierarchical modal-wise heterogeneous graphs for multimodal fusion
Reducing parameters while improving performance in multimodal models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes hierarchical modal-wise heterogeneous graphs (HMHGs)
Introduces Interlaced Mask mechanism for GsiT
Achieves All-Modal-In-One fusion efficiently
Yijie Jin
Incoming Ph.D. in Shanghai Jiao Tong University (SJTU)
Efficient Generative AI · Machine Learning Systems · Multimodal Learning · Natural Language Processing
Junjie Peng
Shanghai University
Xuanchao Lin
School of Computer Engineering and Science, Shanghai University
Haochen Yuan
School of Computer Engineering and Science, Shanghai University
Lan Wang
School of Computer Engineering and Science, Shanghai University
Cangzhi Zheng
School of Computer Engineering and Science, Shanghai University