🤖 AI Summary
Speech emotion recognition (SER) faces the dual challenges of high emotional complexity and scarce labeled data. To address these, we propose a multi-loss collaborative learning framework that integrates SNR-driven, energy-adaptive Mixup data augmentation with a frame-level attention mechanism, significantly enhancing the model's capacity to capture subtle emotional variations and improving feature discriminability. The framework jointly optimizes KL divergence, focal loss, center loss, and supervised contrastive loss to mitigate class imbalance and strengthen inter-class separation. Extensive experiments on four benchmark datasets (IEMOCAP, MSP-IMPROV, RAVDESS, and SAVEE) demonstrate state-of-the-art performance on all four, validating the method's robustness and generalization capability.
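The SNR-driven, energy-adaptive mixing described above can be sketched as follows. This is a hypothetical illustration, not the paper's exact EAM formulation: it scales one waveform relative to another so their energy ratio matches a target SNR before summing, which is the standard way SNR-controlled audio mixing is implemented.

```python
import numpy as np

def energy_adaptive_mixup(x1, x2, snr_db):
    """Mix waveform x2 into x1 so that x1 sits roughly snr_db dB above x2.

    Illustrative sketch only: the function name, the target-SNR parameter,
    and the peak normalization are assumptions, not the paper's definition.
    """
    e1 = np.mean(x1 ** 2)  # average energy of the primary signal
    e2 = np.mean(x2 ** 2)  # average energy of the secondary signal
    # Choose gain g so that e1 / (g^2 * e2) equals the target linear SNR.
    g = np.sqrt(e1 / (e2 * 10 ** (snr_db / 10.0)))
    mixed = x1 + g * x2
    # Peak-normalize to keep the augmented sample in a valid amplitude range.
    return mixed / (np.max(np.abs(mixed)) + 1e-8)
```

Sweeping `snr_db` over a range would yield the diverse augmented samples the summary refers to: at high SNR the mix is dominated by `x1`, at low SNR by `x2`.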
📝 Abstract
Speech emotion recognition (SER) is an important technology in human-computer interaction. However, achieving high performance is challenging due to emotional complexity and scarce annotated data. To tackle these challenges, we propose a multi-loss learning (MLL) framework integrating an energy-adaptive mixup (EAM) method and a frame-level attention module (FLAM). The EAM method leverages SNR-based augmentation to generate diverse speech samples that capture subtle emotional variations. FLAM enhances frame-level feature extraction for multi-frame emotional cues. Our MLL strategy combines Kullback-Leibler (KL) divergence, focal, center, and supervised contrastive losses to optimize learning, address class imbalance, and improve feature separability. We evaluate our method on four widely used SER datasets: IEMOCAP, MSP-IMPROV, RAVDESS, and SAVEE. The results show that our method achieves state-of-the-art performance, suggesting its effectiveness and robustness.
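The four losses in the MLL objective can be combined as a weighted sum. The sketch below shows textbook forms of each term; the weighting coefficients `w`, the soft-target source for the KL term, and all function names are assumptions for illustration, since the abstract does not give the paper's exact formulation.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def focal_loss(logits, y, gamma=2.0):
    # Down-weights easy examples to counter class imbalance.
    p = softmax(logits)[np.arange(len(y)), y]
    return np.mean(-((1 - p) ** gamma) * np.log(p + 1e-12))

def center_loss(feats, y, centers):
    # Pulls each feature toward its class center for intra-class compactness.
    return 0.5 * np.mean(np.sum((feats - centers[y]) ** 2, axis=1))

def kl_loss(p_soft, logits):
    # KL divergence from soft targets (e.g. mixup labels) to predictions.
    q = softmax(logits)
    return np.mean(np.sum(p_soft * (np.log(p_soft + 1e-12) - np.log(q + 1e-12)), axis=1))

def supcon_loss(feats, y, tau=0.1):
    # Supervised contrastive loss over L2-normalized features.
    f = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-12)
    sim = f @ f.T / tau
    n, total = len(y), 0.0
    for i in range(n):
        pos = [j for j in range(n) if j != i and y[j] == y[i]]
        if not pos:
            continue
        denom = np.sum(np.exp(sim[i, [j for j in range(n) if j != i]]))
        total += -np.mean([sim[i, j] - np.log(denom) for j in pos])
    return total / n

def multi_loss(logits, feats, y, p_soft, centers, w=(1.0, 1.0, 0.1, 0.5)):
    # Hypothetical weights w; the paper's actual coefficients are not given.
    return (w[0] * focal_loss(logits, y)
            + w[1] * kl_loss(p_soft, logits)
            + w[2] * center_loss(feats, y, centers)
            + w[3] * supcon_loss(feats, y))
```

In practice the class centers and the weights would be learned or tuned on a validation set; the point of the sketch is only how the four terms jointly shape the feature space: focal and KL terms drive the classifier, while center and contrastive terms structure the embeddings.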