Semi-Supervised Self-Learning Enhanced Music Emotion Recognition

📅 2024-10-29
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Music Emotion Recognition (MER) faces two key challenges: (1) the limited scale of publicly available datasets, and (2) the prevalent segment-level labeling strategy that assigns the global track-level emotion label uniformly to every constituent segment, ignoring emotion's temporal dynamics and thereby introducing severe label noise and overfitting. To address these issues, we propose a semi-supervised self-training framework explicitly designed for music's time-varying emotional character. Our approach introduces, for the first time, a segment-level self-training denoising mechanism: without requiring additional annotations, it dynamically evaluates label confidence and selects reliable samples, automatically identifying and discarding erroneous segment-level labels. The method integrates segment-level feature modeling with iterative pseudo-label refinement. Evaluated on three public benchmark datasets, it achieves state-of-the-art (SOTA) or competitive performance, significantly improving accuracy and generalization robustness, particularly in low-data regimes.

📝 Abstract
Music emotion recognition (MER) aims to identify the emotions conveyed by a given musical piece. However, the public datasets currently available for MER have limited sample sizes. Recently, segment-based methods for emotion-related tasks have been proposed, which train backbone networks on shorter segments instead of entire audio clips, naturally augmenting the training samples without requiring additional resources. The predicted segment-level results are then aggregated to obtain a prediction for the entire song. Most commonly, each segment inherits the label of the clip containing it, but music emotion is not constant across the whole clip, so this strategy introduces label noise and makes training prone to overfitting. To handle the noisy-label issue, we propose a semi-supervised self-learning (SSSL) method that can differentiate between samples with correct and incorrect labels in a self-learning manner, thereby effectively utilizing the augmented segment-level data. Experiments on three public emotion datasets demonstrate that the proposed method achieves better or comparable performance.
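The confidence-based sample selection at the heart of such self-learning denoising can be sketched as follows. This is a minimal illustration, not the paper's exact criterion: the agreement test against the inherited label and the confidence threshold are assumptions.

```python
import numpy as np

def select_reliable_segments(probs, inherited_labels, threshold=0.7):
    """Keep a segment only when the model's own prediction agrees with
    the inherited track-level label AND its confidence exceeds the
    threshold (hypothetical selection rule for illustration).

    probs: (n_segments, n_classes) softmax outputs of the current model
    inherited_labels: (n_segments,) labels copied from the parent track
    Returns a boolean mask over segments deemed reliably labeled.
    """
    preds = probs.argmax(axis=1)   # model's current segment prediction
    conf = probs.max(axis=1)       # confidence of that prediction
    return (preds == inherited_labels) & (conf >= threshold)

# Example: only the first segment both agrees with its inherited
# label and is confident enough, so only it is kept for training.
probs = np.array([[0.9, 0.1],   # confident, agrees with label 0
                  [0.4, 0.6],   # disagrees with label 0
                  [0.8, 0.2]])  # confident, but disagrees with label 1
labels = np.array([0, 0, 1])
mask = select_reliable_segments(probs, labels, threshold=0.7)
print(mask.tolist())  # [True, False, False]
```

In an iterative self-learning loop, the model would be retrained on the masked subset and the selection repeated as predictions sharpen.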
Problem

Research questions and friction points this paper is trying to address.

Limited sample sizes in public MER datasets
Label noise from segment-level emotion inconsistency
Overfitting due to inaccurate segment labeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Segment-based training for natural data augmentation
Semi-supervised self-learning to handle label noise
Aggregating segment-level predictions for song-level results
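The segment-to-song aggregation mentioned above can be sketched with mean pooling over segment class probabilities; this is one common aggregation choice, assumed here for illustration rather than taken from the paper.

```python
import numpy as np

def aggregate_song_prediction(segment_probs):
    """Average segment-level class probabilities across a song and
    return the song-level predicted class (mean pooling).

    segment_probs: (n_segments, n_classes) per-segment softmax outputs
    """
    song_probs = segment_probs.mean(axis=0)  # pool over segments
    return int(song_probs.argmax())

# Example: two of three segments favor class 1, so the pooled
# song-level prediction is class 1.
seg = np.array([[0.2, 0.8],
                [0.6, 0.4],
                [0.1, 0.9]])
print(aggregate_song_prediction(seg))  # 1
```

Majority voting over segment predictions is a frequent alternative; mean pooling keeps the per-class confidences in play.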
Yifu Sun
Fudan University, Shanghai, China; Ping An Technology Co., Ltd., Shenzhen, China
Xulong Zhang
Ping An Technology (Shenzhen) Co., Ltd.
Monan Zhou
Central Conservatory of Music, Beijing, China
Wei Li
Fudan University, Shanghai, China; Central Conservatory of Music, Beijing, China