DLF: Disentangled-Language-Focused Multimodal Sentiment Analysis

📅 2024-12-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing multimodal sentiment analysis models suffer performance degradation from ineffective disentanglement of modality-specific and modality-shared representations, which introduces redundancy and cross-modal conflicts. To address this, we propose a Disentangled-Language-Focused (DLF) framework: (1) disentanglement of modality-specific and modality-shared features, refined by four geometric measures; (2) a Language-Focused Attractor (LFA) that anchors cross-modal attention on linguistic features to enforce language-centric representation learning; and (3) hierarchical predictions for finer-grained sentiment modeling. The approach departs from conventional equal-weight multimodal fusion, improving robustness and interpretability. Extensive experiments demonstrate state-of-the-art performance on CMU-MOSI and CMU-MOSEI; ablation studies validate the efficacy of each component, and the code is publicly available.

📝 Abstract
Multimodal Sentiment Analysis (MSA) leverages heterogeneous modalities, such as language, vision, and audio, to enhance the understanding of human sentiment. While existing models often focus on extracting shared information across modalities or directly fusing heterogeneous modalities, such approaches can introduce redundancy and conflicts due to equal treatment of all modalities and the mutual transfer of information between modality pairs. To address these issues, we propose a Disentangled-Language-Focused (DLF) multimodal representation learning framework, which incorporates a feature disentanglement module to separate modality-shared and modality-specific information. To further reduce redundancy and enhance language-targeted features, four geometric measures are introduced to refine the disentanglement process. A Language-Focused Attractor (LFA) is further developed to strengthen language representation by leveraging complementary modality-specific information through a language-guided cross-attention mechanism. The framework also employs hierarchical predictions to improve overall accuracy. Extensive experiments on two popular MSA datasets, CMU-MOSI and CMU-MOSEI, demonstrate the significant performance gains achieved by the proposed DLF framework. Comprehensive ablation studies further validate the effectiveness of the feature disentanglement module, language-focused attractor, and hierarchical predictions. Our code is available at https://github.com/pwang322/DLF.
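The abstract describes a language-guided cross-attention mechanism in which linguistic features act as queries and a complementary modality supplies keys and values, so the fused output stays anchored to language. A minimal NumPy sketch of that attention pattern (function name, shapes, and the single-head, unprojected formulation are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def language_guided_cross_attention(lang, other):
    """Cross-attention with language as query, another modality as key/value.

    lang:  (T_l, d) language token features (queries)
    other: (T_m, d) audio or visual features (keys and values)
    Returns a (T_l, d) language-anchored representation.
    """
    d = lang.shape[-1]
    scores = lang @ other.T / np.sqrt(d)   # (T_l, T_m) scaled similarities
    attn = softmax(scores, axis=-1)        # each language token attends over the other modality
    return attn @ other                    # aggregate modality features per language token

rng = np.random.default_rng(0)
lang = rng.standard_normal((4, 8))   # 4 language tokens, dim 8
audio = rng.standard_normal((6, 8))  # 6 audio frames, dim 8
out = language_guided_cross_attention(lang, audio)
print(out.shape)  # (4, 8)
```

Because the query always comes from language, the output sequence length and alignment follow the linguistic stream regardless of the other modality's frame rate, which is the asymmetry (as opposed to equal-weight fusion) the abstract emphasizes.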
Problem

Research questions and friction points this paper is trying to address.

Multimodal Sentiment Analysis
Modality-specific Information
Accuracy and Efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Disentangled-Language-Focused (DLF) framework
Multimodal Sentiment Analysis
Hierarchical Predictions and Language-Focused Attractor