TiCAL: Typicality-Based Consistency-Aware Learning for Multimodal Emotion Recognition

📅 2025-11-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing multimodal emotion recognition (MER) methods overlook inter-modal emotional conflicts and suffer from training bias induced by unified supervision labels, particularly on highly inconsistent samples. To address this, we propose TiCAL, a typicality-guided, consistency-aware framework that emulates the staged nature of human emotional perception. TiCAL integrates pseudo unimodal emotion label generation, dynamic modality-consistency assessment, and typicality estimation, all within a hyperbolic embedding space for fine-grained emotional representation. A consistency-weighted loss function optimizes multimodal fusion, mitigating supervision noise from conflicting samples. Extensive experiments show that TiCAL outperforms state-of-the-art methods, e.g., surpassing DMD by about 2.6% on the CMU-MOSEI and MER2023 benchmarks, with particularly strong gains on high-conflict samples. This work establishes a new paradigm for robust, conflict-aware multimodal emotion modeling.

📝 Abstract
Multimodal Emotion Recognition (MER) aims to accurately identify human emotional states by integrating heterogeneous modalities such as visual, auditory, and textual data. Existing approaches predominantly rely on unified emotion labels to supervise model training, often overlooking a critical challenge: inter-modal emotion conflicts, wherein different modalities within the same sample may express divergent emotional tendencies. In this work, we address this overlooked issue by proposing a novel framework, Typicality-based Consistency-aware Multimodal Emotion Recognition (TiCAL), inspired by the stage-wise nature of human emotion perception. TiCAL dynamically assesses the consistency of each training sample by leveraging pseudo unimodal emotion labels alongside a typicality estimation. To further enhance emotion representation, we embed features in a hyperbolic space, enabling the capture of fine-grained distinctions among emotional categories. By incorporating consistency estimates into the learning process, our method improves model performance, particularly on samples exhibiting high modality inconsistency. Extensive experiments on benchmark datasets, e.g., CMU-MOSEI and MER2023, validate the effectiveness of TiCAL in mitigating inter-modal emotional conflicts and enhancing overall recognition accuracy, with an improvement of about 2.6% over the state-of-the-art DMD.
Problem

Research questions and friction points this paper is trying to address.

Addressing inter-modal emotion conflicts in multimodal emotion recognition
Dynamically assessing training sample consistency using typicality estimation
Enhancing emotion representation through hyperbolic space feature embedding
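This page does not detail how the hyperbolic embedding is implemented. As a rough, illustrative sketch only: embedding features in hyperbolic space is commonly done by mapping Euclidean feature vectors into the Poincaré ball via the exponential map at the origin, with geodesic distance then used to compare embeddings. The function names, curvature parameter `c`, and use of the Poincaré ball model are all assumptions, not the paper's exact formulation.

```python
import numpy as np

def expmap0(v, c=1.0, eps=1e-9):
    """Exponential map at the origin of a Poincare ball with curvature -c.

    Maps a Euclidean feature vector into the open ball of radius 1/sqrt(c),
    where distances grow toward the boundary (useful for fine-grained
    category distinctions).
    """
    sqrt_c = np.sqrt(c)
    norm = np.linalg.norm(v) + eps
    return np.tanh(sqrt_c * norm) * v / (sqrt_c * norm)

def mobius_add(x, y, c=1.0):
    """Mobius addition, the ball's analogue of vector addition."""
    xy = np.dot(x, y)
    x2 = np.dot(x, x)
    y2 = np.dot(y, y)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    den = 1 + 2 * c * xy + c ** 2 * x2 * y2
    return num / den

def poincare_dist(x, y, c=1.0):
    """Geodesic distance between two points in the Poincare ball."""
    sqrt_c = np.sqrt(c)
    t = sqrt_c * np.linalg.norm(mobius_add(-x, y, c))
    return (2.0 / sqrt_c) * np.arctanh(np.clip(t, 0.0, 1.0 - 1e-9))
```

Mapped points always stay inside the unit ball (for `c=1`), so the distance is well defined for any pair of embedded features.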
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic consistency assessment using pseudo unimodal labels
Hyperbolic space embedding for fine-grained emotion distinctions
Stage-wise learning inspired by human emotion perception
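The consistency-weighted training described above could, schematically, look like the following sketch: derive a per-sample consistency score from agreement among pseudo unimodal labels, then use it to down-weight the loss on conflicting samples. The majority-vote heuristic and all function names here are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def consistency_weights(unimodal_preds):
    """Per-sample consistency from agreement among pseudo unimodal labels.

    unimodal_preds: (M, N) integer array, M modalities x N samples,
    each entry a predicted emotion class id. Consistency is the fraction
    of modalities agreeing with the per-sample majority label.
    """
    M, N = unimodal_preds.shape
    w = np.empty(N)
    for i in range(N):
        _, counts = np.unique(unimodal_preds[:, i], return_counts=True)
        w[i] = counts.max() / M
    return w

def consistency_weighted_ce(logits, labels, weights):
    """Cross-entropy that down-weights low-consistency (conflicting) samples."""
    # Numerically stable log-softmax over the class dimension.
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    nll = -logp[np.arange(len(labels)), labels]
    return (weights * nll).sum() / weights.sum()
```

A fully consistent sample (all modalities agree) gets weight 1.0, so its supervision signal is untouched; a sample whose modalities split 2-vs-1 gets weight 2/3, softening the influence of the possibly noisy unified label.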
Wen Yin
The Laboratory of Intelligent Collaborative Computing of UESTC
Siyu Zhan
The Laboratory of Intelligent Collaborative Computing of UESTC
Cencen Liu
The Laboratory of Intelligent Collaborative Computing of UESTC
Xin Hu
The Laboratory of Intelligent Collaborative Computing of UESTC
Guiduo Duan
The Laboratory of Intelligent Collaborative Computing of UESTC, Ubiquitous Intelligence and Trusted Services Key Laboratory of Sichuan Province
Xiurui Xie
The Laboratory of Intelligent Collaborative Computing of UESTC
Yuan-Fang Li
Oracle | Monash University
Large language models, knowledge graphs, natural language processing
Tao He
The Laboratory of Intelligent Collaborative Computing of UESTC, Ubiquitous Intelligence and Trusted Services Key Laboratory of Sichuan Province