🤖 AI Summary
Existing MERC methods struggle to simultaneously capture cross-modal shared semantics and modality-specific affective cues (e.g., micro-expressions, prosodic variations, ironic language), leading to insufficient modeling of fine-grained emotions. To address this, we propose an orthogonal disentanglement and projection-based feature alignment framework: orthogonal constraints explicitly separate shared and modality-specific emotional subspaces; reconstruction loss, projection alignment loss, and cross-modal consistency loss jointly enforce structural fidelity and semantic coherence; and contrastive learning coupled with cross-attention mechanisms ensures robust multimodal fusion. Our method achieves significant improvements over state-of-the-art approaches on IEMOCAP and MELD, demonstrating superior capability in modeling subtle affective cues and strong generalizability across datasets. This work establishes a novel paradigm for multimodal dialogue emotion recognition grounded in principled disentanglement and aligned representation learning.
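As a compact view of how these pieces fit together, one plausible way to write the combined training objective implied by this summary is sketched below. The weighting coefficients and the classification term are assumptions for illustration; the paper's exact loss definitions may differ.

```latex
% Hedged sketch of the overall training objective implied by the summary.
% The weights \lambda_i and the classification term \mathcal{L}_{\mathrm{cls}} are assumptions.
\begin{equation}
\mathcal{L} \;=\; \mathcal{L}_{\mathrm{cls}}
  \;+\; \lambda_{1}\,\mathcal{L}_{\mathrm{orth}}    % shared vs. modality-specific orthogonality
  \;+\; \lambda_{2}\,\mathcal{L}_{\mathrm{rec}}     % per-modality reconstruction
  \;+\; \lambda_{3}\,\mathcal{L}_{\mathrm{align}}   % projection alignment in the common space
  \;+\; \lambda_{4}\,\mathcal{L}_{\mathrm{cons}}    % cross-modal consistency / contrastive term
\end{equation}
```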
📝 Abstract
Multimodal Emotion Recognition in Conversation (MERC) significantly enhances emotion recognition performance by integrating complementary emotional cues from text, audio, and visual modalities. While existing methods commonly utilize techniques such as contrastive learning and cross-attention mechanisms to align cross-modal emotional semantics, they typically overlook modality-specific emotional nuances such as micro-expressions, tone variations, and sarcastic language. To overcome these limitations, we propose Orthogonal Disentanglement with Projected Feature Alignment (OD-PFA), a novel framework explicitly designed to capture both shared semantics and modality-specific emotional cues. Our approach first decouples unimodal features into shared and modality-specific components. An orthogonal disentanglement strategy (OD) enforces effective separation between these components, aided by a reconstruction loss that preserves critical emotional information from each modality. Additionally, a projected feature alignment strategy (PFA) maps shared features across modalities into a common latent space and applies a cross-modal consistency alignment loss to enhance semantic coherence. Extensive evaluations on the widely used benchmark datasets IEMOCAP and MELD demonstrate the effectiveness of our proposed OD-PFA on multimodal emotion recognition tasks compared with state-of-the-art approaches.
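To make the disentanglement idea concrete, here is a minimal PyTorch-style sketch of how per-modality shared/specific encoders, a soft orthogonality penalty, a reconstruction term, and a projected cross-modal consistency loss could be wired together. All module names, feature dimensions, and loss forms are illustrative assumptions, not the authors' implementation; the classification and contrastive terms are omitted.

```python
# Minimal, illustrative sketch of orthogonal disentanglement + projected feature alignment.
# NOT the authors' code: module sizes, loss forms, and names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DisentangleBranch(nn.Module):
    """One branch per modality: shared encoder, specific encoder, decoder, projector."""

    def __init__(self, in_dim: int, hid_dim: int = 128):
        super().__init__()
        self.shared_enc = nn.Linear(in_dim, hid_dim)    # cross-modal shared subspace
        self.specific_enc = nn.Linear(in_dim, hid_dim)  # modality-specific subspace
        self.decoder = nn.Linear(2 * hid_dim, in_dim)   # reconstruct the input feature
        self.projector = nn.Linear(hid_dim, hid_dim)    # map shared part to a common space

    def forward(self, x):
        h_shared = self.shared_enc(x)
        h_spec = self.specific_enc(x)
        x_rec = self.decoder(torch.cat([h_shared, h_spec], dim=-1))
        z = F.normalize(self.projector(h_shared), dim=-1)
        return h_shared, h_spec, x_rec, z


def orthogonality_loss(h_shared, h_spec):
    # Penalize correlation between shared and specific components (soft orthogonality).
    return (F.normalize(h_shared, dim=-1) * F.normalize(h_spec, dim=-1)).sum(-1).pow(2).mean()


def consistency_loss(z_a, z_b):
    # Pull projected shared features from two modalities together in the common space.
    return (1.0 - F.cosine_similarity(z_a, z_b, dim=-1)).mean()


if __name__ == "__main__":
    # Toy batch of utterance-level text / audio / visual features (dimensions are made up).
    x_t, x_a, x_v = torch.randn(8, 768), torch.randn(8, 512), torch.randn(8, 342)
    branches = {"t": DisentangleBranch(768), "a": DisentangleBranch(512), "v": DisentangleBranch(342)}

    outs = {m: branches[m](x) for m, x in zip("tav", (x_t, x_a, x_v))}
    loss_orth = sum(orthogonality_loss(o[0], o[1]) for o in outs.values())
    loss_rec = sum(F.mse_loss(o[2], x) for o, x in zip(outs.values(), (x_t, x_a, x_v)))
    loss_cons = (consistency_loss(outs["t"][3], outs["a"][3])
                 + consistency_loss(outs["t"][3], outs["v"][3])
                 + consistency_loss(outs["a"][3], outs["v"][3]))
    total = loss_orth + loss_rec + loss_cons  # classification / contrastive terms omitted here
    print(total.item())
```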