MCIHN: A Hybrid Network Model Based on Multi-path Cross-modal Interaction for Multimodal Emotion Recognition

📅 2025-10-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal emotion recognition faces two key challenges: high modality heterogeneity and weak unimodal emotion representations. To address these, the authors propose MCIHN, a hybrid network model based on multi-path cross-modal interaction. The method introduces three core components: (1) per-modality adversarial autoencoders (AAEs) that learn discriminative emotion features and reconstruct them to retain class-relevant information; (2) a Cross-modal Gate Mechanism model (CGMM) that reduces inter-modality discrepancies and captures emotional correlations between interacting modalities; and (3) a Feature Fusion Module (FFM) that integrates the interaction features for final recognition. Extensive experiments on the SIMS and MOSI benchmarks demonstrate that MCIHN outperforms existing approaches across multiple metrics, including accuracy and F1-score. Ablation studies further confirm the contribution of each component to mitigating modality disparity and strengthening unimodal emotion expressiveness.

📝 Abstract
Multimodal emotion recognition is crucial for future human-computer interaction. However, accurate emotion recognition still faces significant challenges because of the differences between modalities and the difficulty of characterizing unimodal emotional information. To solve these problems, a hybrid network model based on multi-path cross-modal interaction (MCIHN) is proposed. First, adversarial autoencoders (AAEs) are constructed separately for each modality. Each AAE learns discriminative emotion features and reconstructs them through a decoder to obtain more discriminative information about the emotion classes. Then, the latent codes from the AAEs of the different modalities are fed into a predefined Cross-modal Gate Mechanism model (CGMM) to reduce the discrepancy between modalities, establish emotional relationships between interacting modalities, and generate interaction features across modalities. Finally, multimodal fusion is performed by the Feature Fusion Module (FFM) for better emotion recognition. Experiments on the publicly available SIMS and MOSI datasets demonstrate that MCIHN achieves superior performance.
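The abstract's pipeline (per-modality latent codes, gated pairwise interaction, then fusion) can be sketched as follows. This is a minimal illustration, not the paper's actual equations: the sigmoid-gate form, the weight shapes (`W_g`, `W_i`), and fusion by simple concatenation are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cross_modal_gate(z_a, z_b, W_g, W_i):
    """Hypothetical gate over a pair of modality latent codes.

    The gate g decides, per dimension, how much of the raw interaction
    feature passes through, suppressing dimensions where the two
    modalities disagree.
    """
    both = np.concatenate([z_a, z_b])
    g = sigmoid(both @ W_g)      # gate values in (0, 1)
    inter = np.tanh(both @ W_i)  # raw interaction feature in (-1, 1)
    return g * inter

d = 8  # latent dimension of each modality's AAE (assumed)
z_text, z_audio, z_video = (rng.standard_normal(d) for _ in range(3))

# Hypothetical gate/interaction weights (in practice these are learned)
W_g = rng.standard_normal((2 * d, d))
W_i = rng.standard_normal((2 * d, d))

# Multi-path interaction: one gated feature per modality pair
paths = [(z_text, z_audio), (z_text, z_video), (z_audio, z_video)]
interaction = [cross_modal_gate(a, b, W_g, W_i) for a, b in paths]

# Feature fusion sketched as concatenation of all interaction paths
fused = np.concatenate(interaction)
print(fused.shape)  # (24,)
```

In a real model, the fused vector would feed a classification head, and the gate/interaction weights would be trained jointly with the AAEs.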
Problem

Research questions and friction points this paper is trying to address.

Addresses modality differences in multimodal emotion recognition
Enhances discriminative emotion feature learning through adversarial autoencoders
Reduces cross-modal discrepancies using interaction mechanisms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarial autoencoders learn discriminative emotion features
Cross-modal gate mechanism reduces inter-modality discrepancies
Feature fusion module integrates multimodal information effectively
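To make the adversarial-autoencoder contribution concrete, the three loss terms an AAE typically combines can be written out in a single forward pass. All shapes and weights below are illustrative assumptions, not the paper's configuration: a 16-dim unimodal feature, an 8-dim latent code, and a linear discriminator.

```python
import numpy as np

rng = np.random.default_rng(1)

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

x = rng.standard_normal(16)             # one unimodal feature vector (assumed size)
W_enc = rng.standard_normal((16, 8))    # encoder weights (illustrative)
W_dec = rng.standard_normal((8, 16))    # decoder weights (illustrative)
w_disc = rng.standard_normal(8)         # linear discriminator weights

z = np.tanh(x @ W_enc)                  # encoder: latent code
x_hat = z @ W_dec                       # decoder: reconstruction
recon_loss = np.mean((x - x_hat) ** 2)  # reconstruction term

# Adversarial term: the discriminator scores a sample from the prior
# ("real") against the encoder output ("fake"); the encoder is trained
# so that d_fake becomes indistinguishable from d_real.
d_real = sigmoid(rng.standard_normal(8) @ w_disc)
d_fake = sigmoid(z @ w_disc)
disc_loss = -np.log(d_real + 1e-8) - np.log(1.0 - d_fake + 1e-8)
gen_loss = -np.log(d_fake + 1e-8)

print(recon_loss > 0, 0 < d_fake < 1)
```

Training would alternate between minimizing `disc_loss` for the discriminator and `recon_loss + gen_loss` for the encoder/decoder; the resulting latent codes are what the CGMM consumes.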
Haoyang Zhang
Ph.D. student of Computer Science, University of Illinois Urbana-Champaign
Computer Architecture, System Software
Zhou Yang
Xi’an Jiaotong University, Xi’an, China
Ke Sun
University of New South Wales, Sydney, Australia
Yucai Pang
Chongqing University of Posts and Telecommunications, Chongqing, China
Guoliang Xu
Chongqing University of Posts and Telecommunications, Chongqing, China