Orthogonal Disentanglement with Projected Feature Alignment for Multimodal Emotion Recognition in Conversation

📅 2025-11-27
🤖 AI Summary
Existing MERC methods struggle to simultaneously capture cross-modal shared semantics and modality-specific affective cues (e.g., micro-expressions, prosodic variations, ironic language), leading to insufficient modeling of fine-grained emotions. To address this, we propose an orthogonal disentanglement and projection-based feature alignment framework: orthogonal constraints explicitly separate shared and modality-specific emotional subspaces; reconstruction loss, projection alignment loss, and cross-modal consistency loss jointly enforce structural fidelity and semantic coherence; and contrastive learning coupled with cross-attention mechanisms ensures robust multimodal fusion. Our method achieves significant improvements over state-of-the-art approaches on IEMOCAP and MELD, demonstrating superior capability in modeling subtle affective cues and strong generalizability across datasets. This work establishes a novel paradigm for multimodal dialogue emotion recognition grounded in principled disentanglement and aligned representation learning.

📝 Abstract
Multimodal Emotion Recognition in Conversation (MERC) significantly enhances emotion recognition performance by integrating complementary emotional cues from text, audio, and visual modalities. While existing methods commonly utilize techniques such as contrastive learning and cross-attention mechanisms to align cross-modal emotional semantics, they typically overlook modality-specific emotional nuances like micro-expressions, tone variations, and sarcastic language. To overcome these limitations, we propose Orthogonal Disentanglement with Projected Feature Alignment (OD-PFA), a novel framework designed explicitly to capture both shared semantics and modality-specific emotional cues. Our approach first decouples unimodal features into shared and modality-specific components. An orthogonal disentanglement strategy (OD) enforces effective separation between these components, aided by a reconstruction loss that preserves critical emotional information from each modality. Additionally, a projected feature alignment strategy (PFA) maps shared features across modalities into a common latent space and applies a cross-modal consistency alignment loss to enhance semantic coherence. Extensive evaluations on the widely used benchmark datasets IEMOCAP and MELD demonstrate the effectiveness of our proposed OD-PFA on multimodal emotion recognition tasks compared with state-of-the-art approaches.
Problem

Research questions and friction points this paper is trying to address.

How to capture shared and modality-specific emotional cues simultaneously
How to enforce semantic coherence across text, audio, and visual modalities
How to model commonly overlooked nuances such as micro-expressions and sarcastic language
Innovation

Methods, ideas, or system contributions that make the work stand out.

Orthogonal disentanglement separates shared and modality-specific features
Projected feature alignment maps cross-modal features into common space
Reconstruction and consistency losses preserve emotional information and coherence
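The loss components listed above can be sketched numerically. The paper's exact equations are not given on this page, so the formulations below (a per-sample orthogonality penalty, an additive-decomposition reconstruction loss, and an MSE consistency loss) are common choices from the disentanglement literature, not the authors' definitive definitions; all names and shapes are illustrative.

```python
import numpy as np

def orthogonality_loss(shared, specific):
    """Penalize per-sample overlap between shared and modality-specific
    features: mean squared inner product across the batch."""
    return float(np.mean(np.sum(shared * specific, axis=1) ** 2))

def reconstruction_loss(original, shared, specific):
    """The two components should jointly recover the unimodal feature
    (assuming an additive decomposition, a hypothetical choice here)."""
    return float(np.mean((original - (shared + specific)) ** 2))

def alignment_loss(proj_a, proj_b):
    """Cross-modal consistency: shared features of two modalities,
    projected into the common latent space, should agree (MSE)."""
    return float(np.mean((proj_a - proj_b) ** 2))

# Toy example: perfectly disentangled 2-D features for one utterance.
shared   = np.array([[1.0, 0.0]])   # shared emotional semantics
specific = np.array([[0.0, 1.0]])   # modality-specific cue
original = shared + specific

print(orthogonality_loss(shared, specific))             # 0.0
print(reconstruction_loss(original, shared, specific))  # 0.0
```

In practice the three terms would be weighted and summed with the classification loss; the toy vectors above are chosen so each term is exactly zero at the ideal disentangled solution.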
Xinyi Che
Sichuan University, Chengdu 610065, China
Wenbo Wang
Harbin Institute of Technology, Harbin 150001, China
Jian Guan
Harbin Engineering University, Harbin 150001, China
Qijun Zhao
Professor of Computer Science, Sichuan University
Biometrics · 3D Vision · Object Detection and Recognition · Face Recognition · Fingerprint Recognition