🤖 AI Summary
Multimodal dialogue emotion recognition faces challenges in cross-modal alignment and fusion due to complex couplings among textual, acoustic, and visual signals. To address this, we propose the Visual Emotion-Guided Anchoring (VEGA) mechanism: leveraging a CLIP-based image encoder, VEGA constructs emotion-specific facial visual anchors—psychologically inspired visual prototypes that serve as semantic priors for psychologically grounded cross-modal alignment. A stochastic anchor sampling strategy is introduced to balance intra-class diversity and semantic stability, while a dual-branch self-distillation architecture enhances generalization. Evaluated on IEMOCAP and MELD, VEGA achieves state-of-the-art performance, significantly outperforming existing methods. Results demonstrate both the effectiveness of prototype-guided multimodal fusion and the robustness of the proposed framework.
📝 Abstract
Multimodal Emotion Recognition in Conversations remains a challenging task due to the complex interplay of textual, acoustic, and visual signals. While recent models have improved performance via advanced fusion strategies, they often lack psychologically meaningful priors to guide multimodal alignment. In this paper, we revisit the use of CLIP and propose a novel Visual Emotion-Guided Anchoring (VEGA) mechanism that introduces class-level visual semantics into the fusion and classification process. Distinct from prior work that primarily utilizes CLIP's textual encoder, our approach leverages its image encoder to construct emotion-specific visual anchors from facial exemplars. These anchors guide unimodal and multimodal features toward a perceptually grounded and psychologically aligned representation space, drawing inspiration from cognitive theories (prototypical emotion categories and multisensory integration). A stochastic anchor sampling strategy further enhances robustness by balancing semantic stability and intra-class diversity. Integrated into a dual-branch architecture with self-distillation, our VEGA-augmented model achieves state-of-the-art performance on IEMOCAP and MELD. Code is available at: https://github.com/dkollias/VEGA.
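The anchor-construction idea can be illustrated with a minimal sketch. This is not the authors' implementation: the paper does not specify how anchors are formed from exemplar embeddings, so averaging a random subset per class is one plausible reading of "stochastic anchor sampling", and the CLIP image encoder is stubbed with random unit vectors (the function names, the 512-d embedding size, and the subset size `k` are all illustrative assumptions).

```python
import numpy as np

def build_anchor_bank(encoder, exemplars_by_class):
    """Encode each emotion class's facial exemplars into an embedding bank."""
    return {cls: encoder(images) for cls, images in exemplars_by_class.items()}

def sample_anchor(bank, cls, k, rng):
    """Stochastic anchor sampling (illustrative): average a random subset of
    k exemplar embeddings. Averaging keeps the anchor near the class
    prototype (semantic stability); resampling a different subset each
    training step varies it (intra-class diversity)."""
    embeddings = bank[cls]
    idx = rng.choice(len(embeddings), size=min(k, len(embeddings)), replace=False)
    anchor = embeddings[idx].mean(axis=0)
    return anchor / np.linalg.norm(anchor)  # keep anchors on the unit sphere

# Hypothetical stand-in for a CLIP image encoder: random unit vectors in
# place of real 512-d CLIP embeddings of face crops.
rng = np.random.default_rng(0)
def stub_clip_image_encoder(images):
    e = rng.normal(size=(len(images), 512))
    return e / np.linalg.norm(e, axis=1, keepdims=True)

# Placeholder "images": one dummy array per facial exemplar.
exemplars = {"happy": [np.zeros(1)] * 8, "sad": [np.zeros(1)] * 5}
bank = build_anchor_bank(stub_clip_image_encoder, exemplars)
anchor = sample_anchor(bank, "happy", k=4, rng=rng)
print(anchor.shape)  # (512,)
```

In this reading, unimodal and fused features would then be pulled toward the sampled anchor of their ground-truth class (e.g. via a cosine-similarity loss), so the anchors act as class-level visual priors during fusion and classification.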