🤖 AI Summary
To address the scarcity of large-scale, publicly available audio–lyrics bimodal datasets for Music Emotion Recognition (MER), this paper introduces MERGE—the first open-source, large-scale bimodal dataset supporting static emotion classification. The authors propose a semi-automated construction pipeline and release three aligned subsets—audio-only, lyrics-only, and audio–lyrics bimodal—along with standardized train/validation/test splits. For modeling, they systematically benchmark conventional handcrafted features paired with SVM and Random Forest classifiers against deep unimodal models (CNN, LSTM) and bimodal fusion architectures, including a novel dual-stream deep neural network. The dual-stream model achieves a 79.21% macro-F1 score on MERGE, outperforming prior approaches and validating the dataset's utility. MERGE establishes a new benchmark for bimodal MER research and enables reproducible, large-scale evaluation of multimodal emotion modeling techniques.
📝 Abstract
The Music Emotion Recognition (MER) field has seen steady development in recent years, with contributions from feature engineering, machine learning, and deep learning. The landscape has also shifted from audio-centric systems to bimodal ensembles that combine audio and lyrics. However, the severe lack of public, sizeable bimodal databases has hampered the development and improvement of bimodal audio-lyrics systems. This article proposes three new MER research datasets—audio-only, lyrics-only, and bimodal—collectively called MERGE, created using a semi-automatic approach. To comprehensively assess the proposed datasets and establish a baseline for benchmarking, we conducted several experiments for each modality, using feature engineering, machine learning, and deep learning methodologies. In addition, we propose and validate fixed train-validation-test splits. The obtained results confirm the viability of the proposed datasets, with the best overall result of 79.21% F1-score achieved for bimodal classification using a deep neural network.
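For context, the reported metric is the macro-averaged F1-score, which computes an F1 per emotion class and then averages them, so each class counts equally regardless of its frequency. A minimal sketch of that computation (the quadrant labels Q1–Q4, a common taxonomy in MER based on Russell's model, and the sample predictions below are purely illustrative, not data from the paper):

```python
def macro_f1(y_true, y_pred, classes):
    """Macro-averaged F1: mean of per-class F1 scores."""
    f1_scores = []
    for c in classes:
        # Per-class counts: true/false positives and false negatives.
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f1_scores.append(2 * precision * recall / (precision + recall)
                         if (precision + recall) else 0.0)
    return sum(f1_scores) / len(f1_scores)

# Hypothetical ground truth and predictions over four emotion quadrants.
y_true = ["Q1", "Q2", "Q3", "Q4", "Q1", "Q2"]
y_pred = ["Q1", "Q2", "Q3", "Q1", "Q1", "Q2"]
print(round(macro_f1(y_true, y_pred, ["Q1", "Q2", "Q3", "Q4"]), 4))
```

This matches `sklearn.metrics.f1_score(y_true, y_pred, average="macro")`; the manual version is shown only to make the averaging explicit.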