🤖 AI Summary
This work addresses a limitation of existing emotion recognition methods: they predominantly focus on single emotions and struggle to capture the structured co-occurrence patterns of multiple emotions in real-world scenarios. To this end, we propose a memory-guided prototypical co-occurrence learning framework that, for the first time, integrates cognitive memory mechanisms with prototype-based learning. Our approach employs multi-scale associative memory to fuse multimodal signals, constructs an emotion-specific prototype memory bank, and introduces memory retrieval alongside prototype relation distillation to explicitly model semantic co-occurrence, valence consistency, and structural correlations among emotions. Evaluated on two public datasets, the proposed method significantly outperforms state-of-the-art models, with both quantitative and qualitative results demonstrating its effectiveness in recognizing complex, mixed emotions.
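The summary mentions prototype relation distillation without showing how it might operate. Below is a minimal PyTorch sketch of one plausible formulation, aligning the pairwise-similarity structure of emotion prototypes across two modalities; the values of `K`, `dim`, and `tau`, the helper `relation_logits`, and the KL objective are all illustrative assumptions, not the authors' published implementation.

```python
# Illustrative sketch of prototype relation distillation (NOT the
# authors' code): align the pairwise-similarity structure of emotion
# prototypes across two modalities. K, dim, and tau are assumed values.
import torch
import torch.nn.functional as F

K, dim, tau = 8, 128, 0.1  # assumed: 8 emotion categories, 128-d prototypes

# Emotion-specific prototype banks, one per modality (learnable in practice).
proto_phys = torch.randn(K, dim, requires_grad=True)   # physiological
proto_behav = torch.randn(K, dim, requires_grad=True)  # behavioral

def relation_logits(protos: torch.Tensor) -> torch.Tensor:
    """Temperature-scaled pairwise cosine similarities among prototypes."""
    p = F.normalize(protos, dim=-1)
    return (p @ p.t()) / tau  # (K, K)

# Distill: pull the behavioral relation structure toward the physiological
# one by minimizing KL divergence between row-wise relation distributions.
loss = F.kl_div(
    F.log_softmax(relation_logits(proto_behav), dim=-1),  # student (log-probs)
    F.softmax(relation_logits(proto_phys), dim=-1),       # teacher (probs)
    reduction="batchmean",
)
loss.backward()  # gradients flow into both prototype banks
```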
📝 Abstract
Emotion recognition from multi-modal physiological and behavioral signals plays a pivotal role in affective computing, yet most existing models remain constrained to predicting single emotions in controlled laboratory settings. Real-world human emotional experiences, by contrast, are often characterized by the simultaneous presence of multiple affective states, spurring recent interest in mixed emotion recognition as an emotion distribution learning problem. Current approaches, however, often neglect the valence consistency and structured correlations inherent among coexisting emotions. To address this limitation, we propose a Memory-guided Prototypical Co-occurrence Learning (MPCL) framework that explicitly models emotion co-occurrence patterns. Specifically, we first fuse multi-modal signals via a multi-scale associative memory mechanism. To capture cross-modal semantic relationships, we construct emotion-specific prototype memory banks, yielding rich physiological and behavioral representations, and employ prototype relation distillation to enforce cross-modal alignment in the latent prototype space. Furthermore, inspired by human cognitive memory systems, we introduce a memory retrieval strategy that extracts semantic-level co-occurrence associations across emotion categories. Through this bottom-up hierarchical abstraction, our model learns affectively informative representations for accurate emotion distribution prediction. Comprehensive experiments on two public datasets demonstrate that MPCL consistently outperforms state-of-the-art methods in mixed emotion recognition, both quantitatively and qualitatively.
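To make the memory-retrieval step more concrete, here is a minimal sketch of reading from an emotion-prototype memory bank with soft attention and predicting an emotion distribution over categories. This is an assumption-laden illustration, not the published MPCL code: the class name `PrototypeMemoryRetrieval`, the attention-based read-out, and all shapes are hypothetical.

```python
# Hypothetical sketch of attention-based memory retrieval over an
# emotion-prototype bank, followed by emotion distribution prediction.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeMemoryRetrieval(nn.Module):
    def __init__(self, num_emotions: int = 8, dim: int = 128):
        super().__init__()
        # Emotion-specific prototype memory bank (one slot per emotion).
        self.bank = nn.Parameter(torch.randn(num_emotions, dim) * 0.02)
        self.query = nn.Linear(dim, dim)
        self.head = nn.Linear(2 * dim, num_emotions)

    def forward(self, fused: torch.Tensor) -> torch.Tensor:
        # fused: (B, dim) multi-modal feature after associative fusion.
        q = self.query(fused)                                   # (B, dim)
        attn = F.softmax(q @ self.bank.t() / q.size(-1) ** 0.5, dim=-1)
        retrieved = attn @ self.bank                            # (B, dim) memory read-out
        logits = self.head(torch.cat([fused, retrieved], dim=-1))
        # Emotion *distribution*: intensities sum to 1 across categories,
        # matching the emotion-distribution-learning formulation.
        return F.softmax(logits, dim=-1)

# Toy forward pass.
model = PrototypeMemoryRetrieval()
dist = model(torch.randn(4, 128))  # (4, 8) predicted emotion distributions
print(dist.sum(dim=-1))            # each row sums to 1
```

Soft attention over the bank lets a single input activate several prototypes at once, which is one natural way to expose co-occurrence structure among emotions to the prediction head.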