🤖 AI Summary
To address inefficiencies in multi-task feature fusion, class imbalance, and semantic confusion in natural-scene speech emotion recognition (SER), this paper proposes a multi-task collaborative learning framework. Methodologically, we design a collaborative attention module that enables context-aware, dynamic feature fusion between the primary emotion classification task and auxiliary tasks, including gender identification, speaker verification, and automatic speech recognition (ASR). We further introduce a sample-weighted focal contrastive (SWFC) loss to jointly mitigate class imbalance and inter-class semantic confusion. The framework is fine-tuned end-to-end on self-supervised pretrained models (e.g., wav2vec 2.0) under a multi-task objective. Evaluated on the Speech Emotion Recognition in Naturalistic Conditions challenge, our approach achieves significant improvements in emotion classification accuracy and generalization robustness, demonstrating the effectiveness of collaborative task modeling and customized loss design.
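The collaborative (co-)attention fusion described above can be sketched roughly as follows. This is an illustrative sketch, not the paper's implementation: the projection matrices `W_q`, `W_k`, `W_v`, the one-vector-per-task feature representation, and the concatenation-based fusion are all assumptions made for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention_fuse(emotion_feat, aux_feats, W_q, W_k, W_v):
    """Fuse primary-task features with auxiliary-task features.

    emotion_feat: (d,)  feature vector of the emotion (primary) task.
    aux_feats:    (n_tasks, d) one feature vector per auxiliary task
                  (e.g., gender, speaker, ASR).
    W_q/W_k/W_v:  (d, d) learned projections (hypothetical names).
    """
    q = emotion_feat @ W_q                    # query from the primary task
    k = aux_feats @ W_k                       # keys from auxiliary tasks
    v = aux_feats @ W_v                       # values from auxiliary tasks
    scores = k @ q / np.sqrt(q.shape[0])      # scaled dot-product scores
    weights = softmax(scores)                 # one weight per auxiliary task
    context = weights @ v                     # context-aware auxiliary summary
    # Fusion by concatenation is one plausible choice, not the paper's.
    return np.concatenate([emotion_feat, context]), weights
```

The attention weights let the emotion branch dynamically decide, per utterance, how much each auxiliary task's features should contribute to the fused representation.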
📄 Abstract
This study investigates fine-tuning self-supervised learning (SSL) models with multi-task learning (MTL) to enhance speech emotion recognition (SER). The framework simultaneously handles four related tasks: emotion recognition, gender recognition, speaker verification, and automatic speech recognition. An innovative co-attention module is introduced to dynamically capture the interactions between features from the primary emotion classification task and the auxiliary tasks, enabling context-aware fusion. Moreover, we introduce the Sample-Weighted Focal Contrastive (SWFC) loss function to address class imbalance and semantic confusion by adjusting sample weights for difficult and minority samples. The method is validated on the Categorical Emotion Recognition task of the Speech Emotion Recognition in Naturalistic Conditions Challenge, showing significant performance improvements.
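One plausible reading of the SWFC loss, sketched below: a supervised contrastive loss over normalized embeddings, modulated by a focal term `(1 - p)^gamma` that up-weights hard positives and an inverse-frequency class weight that up-weights minority-class samples. The exact weighting scheme, `gamma`, and temperature `tau` here are assumptions, not the paper's definition.

```python
import numpy as np

def swfc_loss(features, labels, class_counts, gamma=2.0, tau=0.1):
    """Sketch of a sample-weighted focal contrastive loss.

    features:     (N, d) L2-normalized embeddings.
    labels:       (N,) integer class labels.
    class_counts: dict mapping label -> number of samples of that class.
    gamma, tau:   focal exponent and temperature (assumed hyperparameters).
    """
    n = len(labels)
    sim = features @ features.T / tau          # pairwise similarities
    loss = 0.0
    for i in range(n):
        same = (labels == labels[i])
        same[i] = False
        if not same.any():                      # no positives for this anchor
            continue
        logits = np.delete(sim[i], i)           # drop self-similarity
        pos_mask = np.delete(labels == labels[i], i)
        probs = np.exp(logits) / np.exp(logits).sum()
        p_pos = probs[pos_mask]
        focal = (1.0 - p_pos) ** gamma          # up-weight hard positives
        # Inverse-frequency class weight (assumed form) for minority classes.
        w_cls = n / (len(class_counts) * class_counts[int(labels[i])])
        loss += -w_cls * (focal * np.log(p_pos + 1e-12)).mean()
    return loss / n
```

The focal term drives the model to concentrate on semantically confusable (hard) pairs, while the class weight counteracts the skewed emotion distribution typical of naturalistic SER corpora.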