🤖 AI Summary
Multimodal sentiment recognition faces two key challenges: high heterogeneity across modalities and weak unimodal sentiment representations. To address these, we propose a hybrid network model based on multipath cross-modal interaction (MCIHN). The method introduces three core components: (1) per-modality adversarial autoencoders (AAEs) that learn discriminative emotion features and reconstruct them to recover class-relevant information; (2) a Cross-modal Gate Mechanism model (CGMM) that captures sentiment correlations across modalities while suppressing inter-modality discrepancies; and (3) a Feature Fusion Module (FFM) that combines the interaction features for final classification. Experiments on the SIMS and MOSI benchmarks show that MCIHN outperforms existing approaches on multiple metrics, including accuracy and F1-score, and ablation studies confirm that each component helps mitigate modality disparity and strengthen unimodal sentiment expressiveness.
📝 Abstract
Multimodal emotion recognition is crucial for future human-computer interaction. However, accurate emotion recognition still faces significant challenges due to discrepancies across modalities and the difficulty of characterizing unimodal emotional information. To address these problems, a hybrid network model based on multipath cross-modal interaction (MCIHN) is proposed. First, adversarial autoencoders (AAEs) are constructed separately for each modality. Each AAE learns discriminative emotion features and reconstructs them through a decoder to obtain more discriminative information about the emotion classes. Then, the latent codes from the AAEs of the different modalities are fed into a predefined Cross-modal Gate Mechanism model (CGMM), which reduces the discrepancy between modalities, establishes emotional relationships between interacting modalities, and generates interaction features across modalities. Finally, multimodal fusion is performed with the Feature Fusion Module (FFM) for better emotion recognition. Experiments on the publicly available SIMS and MOSI datasets demonstrate that MCIHN achieves superior performance.
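The gating idea behind the CGMM can be sketched in a few lines: latent codes from two modality-specific autoencoders are blended by a learned sigmoid gate, so the network decides elementwise how much each modality contributes to the interaction feature. Everything below (dimensions, weights, modality names) is an illustrative assumption, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # latent dimension of each modality's code (assumed)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cross_modal_gate(z_a, z_b, W, b):
    """Blend two modality latent codes with a learned gate.

    g in (0, 1)^d chooses, per dimension, how much of modality A
    versus modality B enters the interaction feature.
    """
    g = sigmoid(np.concatenate([z_a, z_b]) @ W + b)
    return g * z_a + (1.0 - g) * z_b  # convex combination per dimension

# Hypothetical latent codes from the text and audio AAEs.
z_text = rng.standard_normal(d)
z_audio = rng.standard_normal(d)

# Randomly initialized gate parameters (would be trained in practice).
W = 0.1 * rng.standard_normal((2 * d, d))
b = np.zeros(d)

fused = cross_modal_gate(z_text, z_audio, W, b)
print(fused.shape)  # (8,)
```

Because the gate output lies in (0, 1), the fused vector is an elementwise convex combination of the two codes, which is one simple way a gate can suppress a modality whose signal conflicts with the other.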