Addressing Emotion Bias in Music Emotion Recognition and Generation with Frechet Audio Distance

📅 2024-09-23
📈 Citations: 3
✨ Influential: 0
📄 PDF
🤖 AI Summary
Music emotion recognition (MER) and emotion-aware music generation (EMG) suffer from emotion bias due to overreliance on single audio encoders or subjective evaluation metrics. Method: This paper proposes a unified evaluation and optimization framework: (i) it introduces Multi-Encoder Fréchet Audio Distance (ME-FAD), a reference-free, objective metric that fuses representations from multiple pretrained audio encoders to quantify emotional consistency; and (ii) it designs an enhanced EMG model that explicitly optimizes alignment of generated music's emotional distributions across these complementary encoder spaces. Contribution/Results: Experiments demonstrate that the proposed approach significantly mitigates emotion bias in both MER and EMG tasks. It outperforms two classes of state-of-the-art baselines in emotional diversity, expressive realism, and cross-encoder robustness. By establishing a reproducible, generalizable paradigm for emotion evaluation and generation, this work advances principled modeling of musical affect.

๐Ÿ“ Abstract
The complex nature of musical emotion introduces inherent bias in both recognition and generation, particularly when relying on a single audio encoder, emotion classifier, or evaluation metric. In this work, we conduct a study on Music Emotion Recognition (MER) and Emotional Music Generation (EMG), employing diverse audio encoders alongside Frechet Audio Distance (FAD), a reference-free evaluation metric. Our study begins with a benchmark evaluation of MER, highlighting the limitations of using a single audio encoder and the disparities observed across different measurements. We then propose assessing MER performance using FAD derived from multiple encoders to provide a more objective measure of musical emotion. Furthermore, we introduce an enhanced EMG approach designed to improve both the variability and prominence of generated musical emotion, thereby enhancing its realism. Additionally, we investigate the differences in realism between the emotions conveyed in real and synthetic music, comparing our EMG model against two baseline models. Experimental results underscore the issue of emotion bias in both MER and EMG and demonstrate the potential of using FAD and diverse audio encoders to evaluate musical emotion more objectively and effectively.
Problem

Research questions and friction points this paper is trying to address.

Addressing inherent bias in music emotion recognition and generation
Evaluating musical emotion objectively using Frechet Audio Distance
Improving variability and realism in emotional music generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Frechet Audio Distance for emotion evaluation
Employs diverse audio encoders to reduce bias
Enhances Emotional Music Generation variability and realism
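The core evaluation idea above can be made concrete with a minimal sketch of Fréchet Audio Distance: fit a Gaussian to embeddings of a reference set and a generated set, then measure the Fréchet distance between the two Gaussians. The function below and the averaging across encoders are an illustrative reconstruction under common FAD conventions, not the paper's implementation; the toy data and shapes are assumptions.

```python
# Sketch of Frechet Audio Distance (FAD) between two sets of audio embeddings.
# Shapes, toy data, and the multi-encoder averaging are illustrative assumptions.
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    """FAD between Gaussians fit to two (N x D) embedding sets:
    ||mu_a - mu_b||^2 + Tr(C_a + C_b - 2 (C_a C_b)^{1/2})."""
    mu_a, mu_b = emb_a.mean(axis=0), emb_b.mean(axis=0)
    cov_a = np.cov(emb_a, rowvar=False)
    cov_b = np.cov(emb_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):  # numerical noise can add tiny imaginary parts
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))

# Toy usage: embeddings standing in for "reference" vs. "generated" music.
rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.0, size=(500, 8))
gen = rng.normal(0.5, 1.2, size=(500, 8))
print(frechet_distance(ref, gen))  # grows as the two distributions diverge

# In the spirit of the multi-encoder evaluation described above, one could
# average FAD over embeddings produced by several pretrained audio encoders
# to reduce the bias of any single encoder's representation space.
```

Averaging the score over complementary encoder spaces is what makes the metric less sensitive to the quirks of any one pretrained model, which is the bias the paper targets.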