🤖 AI Summary
This work addresses the dual ambiguity inherent in multimodal emotion recognition, which stems both from inter-annotator disagreement and from conflicts between modalities such as speech and text, and which existing methods struggle to model jointly. To tackle this challenge, the authors propose AmbER², a framework that explicitly and simultaneously captures annotator-level and modality-level emotional uncertainty. Leveraging a teacher-student architecture, label distribution learning, and a distribution-level loss function, AmbER² achieves robust recognition of highly ambiguous samples. Evaluations on IEMOCAP and MSP-Podcast show consistent gains over standard cross-entropy baselines; on IEMOCAP, relative improvements reach 20.3% in Bhattacharyya coefficient, 13.6% in R², and 3.8% and 4.5% in accuracy and F1-score, respectively, with particularly strong performance on high-uncertainty instances.
📝 Abstract
Emotion recognition is inherently ambiguous, with uncertainty arising both from rater disagreement and from discrepancies across modalities such as speech and text. There is growing interest in modeling rater ambiguity using label distributions. However, modality ambiguity remains underexplored, and multimodal approaches often rely on simple feature fusion without explicitly addressing conflicts between modalities. In this work, we propose AmbER$^2$, a dual ambiguity-aware framework that simultaneously models rater-level and modality-level ambiguity through a teacher-student architecture with a distribution-wise training objective. Evaluations on IEMOCAP and MSP-Podcast show that AmbER$^2$ consistently improves distributional fidelity over conventional cross-entropy baselines and achieves performance competitive with, or superior to, recent state-of-the-art systems. For example, on IEMOCAP, AmbER$^2$ achieves relative improvements of 20.3% on Bhattacharyya coefficient (0.83 vs. 0.69), 13.6% on R$^2$ (0.67 vs. 0.59), 3.8% on accuracy (0.683 vs. 0.658), and 4.5% on F1 (0.675 vs. 0.646). Further analysis across ambiguity levels shows that explicitly modeling ambiguity is particularly beneficial for highly uncertain samples. These findings highlight the importance of jointly addressing rater and modality ambiguity when building robust emotion recognition systems.
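The Bhattacharyya coefficient used above measures how closely a model's predicted emotion distribution matches the soft label distribution derived from rater votes (1 means identical distributions, 0 means disjoint support). A minimal sketch of this metric, with hypothetical 4-class distributions for illustration (the class layout and numbers are assumptions, not the paper's data):

```python
import math

def bhattacharyya_coefficient(p, q):
    """Bhattacharyya coefficient between two discrete distributions.

    BC(p, q) = sum_i sqrt(p_i * q_i), a value in [0, 1];
    1 iff the distributions are identical.
    """
    assert abs(sum(p) - 1.0) < 1e-6 and abs(sum(q) - 1.0) < 1e-6
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

# Hypothetical 4-class emotion setup (e.g. angry / happy / sad / neutral):
annotator_dist = [0.50, 0.00, 0.25, 0.25]  # soft label from rater votes
predicted_dist = [0.40, 0.10, 0.30, 0.20]  # model's predicted distribution

bc = bhattacharyya_coefficient(annotator_dist, predicted_dist)  # ≈ 0.94
```

Unlike accuracy on a single hard label, this score rewards a model for reproducing the full shape of rater disagreement, which is why it is the natural headline metric for a distribution-level training objective.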