🤖 AI Summary
Music emotion recognition (MER) and emotion-aware music generation (EMG) suffer from emotion bias caused by overreliance on a single audio encoder or on subjective evaluation metrics. Method: This paper proposes a unified evaluation and optimization framework: (i) it introduces Multi-Encoder Fréchet Audio Distance (ME-FAD), a reference-free, objective metric that fuses representations from multiple pretrained audio encoders to quantify emotional consistency; and (ii) it designs an enhanced EMG model that explicitly aligns the emotional distributions of generated music across these complementary encoder spaces. Contribution/Results: Experiments show that the proposed approach substantially mitigates emotion bias in both MER and EMG, outperforming two classes of state-of-the-art baselines in emotional diversity, expressive realism, and cross-encoder robustness. By establishing a reproducible, generalizable paradigm for emotion evaluation and generation, this work advances principled modeling of musical affect.
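For concreteness, below is a minimal sketch of how a multi-encoder FAD score could be assembled: standard FAD is computed in each encoder's embedding space, and the per-encoder scores are averaged. This is an illustrative assumption, not the paper's implementation; the helper names (`frechet_audio_distance`, `multi_encoder_fad`) and the `encoders` interface (callables mapping a batch of audio to an embedding matrix) are hypothetical.

```python
# Hedged sketch of multi-encoder FAD: FAD per encoder embedding space,
# averaged across encoders. Not the paper's exact ME-FAD definition.
import numpy as np
from scipy.linalg import sqrtm

def frechet_audio_distance(emb_ref: np.ndarray, emb_eval: np.ndarray) -> float:
    """FAD between two embedding sets, each of shape (n_samples, dim)."""
    mu_r, mu_e = emb_ref.mean(axis=0), emb_eval.mean(axis=0)
    sigma_r = np.cov(emb_ref, rowvar=False)
    sigma_e = np.cov(emb_eval, rowvar=False)
    covmean = sqrtm(sigma_r @ sigma_e)
    if np.iscomplexobj(covmean):  # numerical noise can introduce tiny imaginary parts
        covmean = covmean.real
    diff = mu_r - mu_e
    return float(diff @ diff + np.trace(sigma_r + sigma_e - 2.0 * covmean))

def multi_encoder_fad(ref_audio, eval_audio, encoders) -> float:
    """Average FAD over several pretrained encoders (each: audio -> embeddings)."""
    scores = [frechet_audio_distance(enc(ref_audio), enc(eval_audio)) for enc in encoders]
    return float(np.mean(scores))
```

Averaging across encoder spaces is one simple fusion choice; the key point it illustrates is that no single encoder's geometry decides the score, which is what counters single-encoder emotion bias.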
📄 Abstract
The complex nature of musical emotion introduces inherent bias into both recognition and generation, particularly when relying on a single audio encoder, emotion classifier, or evaluation metric. In this work, we study Music Emotion Recognition (MER) and Emotional Music Generation (EMG), employing diverse audio encoders alongside Fréchet Audio Distance (FAD), a reference-free evaluation metric. Our study begins with a benchmark evaluation of MER that highlights the limitations of using a single audio encoder and the disparities observed across different measurements. We then propose assessing MER performance using FAD derived from multiple encoders to provide a more objective measure of musical emotion. Furthermore, we introduce an enhanced EMG approach designed to improve both the variability and prominence of generated musical emotion, thereby enhancing its realism. Finally, we investigate the gap in realism between the emotions conveyed in real and synthetic music, comparing our EMG model against two baseline models. Experimental results underscore the issue of emotion bias in both MER and EMG and demonstrate the potential of FAD with diverse audio encoders for evaluating musical emotion more objectively and effectively.
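For reference, FAD is the Fréchet distance between multivariate Gaussians fitted to the embedding distributions of a reference set and an evaluated set; this is the standard definition rather than notation taken from the paper:

$$
\mathrm{FAD} = \lVert \mu_r - \mu_e \rVert_2^2 + \operatorname{Tr}\!\left(\Sigma_r + \Sigma_e - 2\,(\Sigma_r \Sigma_e)^{1/2}\right)
$$

where $(\mu_r, \Sigma_r)$ and $(\mu_e, \Sigma_e)$ are the mean and covariance of the reference and evaluated embeddings, and lower values indicate closer distributions.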