🤖 AI Summary
Multimodal Music Emotion Recognition (MMER) faces critical challenges including scarce labeled data, limited multimodal corpora, and insufficient real-time performance; existing models remain suboptimal in robustness, scalability, and interpretability. To address these, this work proposes the first structured, four-stage unified framework—comprising data selection, cross-modal feature extraction, feature fusion, and emotion prediction—that systematically integrates heterogeneous modalities (audio, text, visual, and physiological signals) via deep learning–driven co-modelling. Through a comprehensive review of over 100 studies, we delineate the technical evolution trajectory, affirm the centrality of deep learning and advanced fusion strategies, and explicitly identify current bottlenecks and future research directions. The framework delivers both theoretical rigor and practical feasibility, enabling applications in adaptive music recommendation, emotion-aware therapeutic systems, and intelligent entertainment.
📝 Abstract
Multimodal music emotion recognition (MMER) is an emerging discipline in music information retrieval that has experienced a surge of interest in recent years. This survey provides a comprehensive overview of the current state of the art in MMER. After discussing the different approaches and techniques used in the field, the paper introduces a four-stage MMER framework comprising multimodal data selection, feature extraction, feature processing, and final emotion prediction. The survey further reveals significant advances in deep learning methods and the growing importance of feature fusion techniques. Despite this progress, challenges remain, including the need for large annotated datasets, corpora covering more modalities, and real-time processing capabilities. The paper also contributes to the field by identifying critical gaps in current research and suggesting potential directions for future work. These gaps underscore the importance of developing robust, scalable, and interpretable models for MMER, with implications for applications in music recommendation systems, therapeutic tools, and entertainment.
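The four-stage framework described above can be sketched as a minimal pipeline. This is an illustrative toy, not the survey's implementation: all function names, the statistics-based "feature extraction", the concatenation fusion, the random stand-in signals, and the untrained linear classifier are assumptions chosen to make each stage concrete.

```python
import numpy as np

# Hypothetical sketch of the four-stage MMER pipeline; names and
# components are illustrative assumptions, not the survey's method.

def extract_features(signal):
    # Stage 2: per-modality feature extraction. Simple statistics stand
    # in for learned deep-network embeddings.
    return np.array([signal.mean(), signal.std()])

def fuse_features(feature_list):
    # Stage 3: early (feature-level) fusion by concatenation, one of the
    # fusion strategies surveys in this area commonly discuss.
    return np.concatenate(feature_list)

def predict_emotion(fused, weights, labels):
    # Stage 4: emotion prediction with a linear scorer standing in for a
    # trained classifier.
    scores = weights @ fused
    return labels[int(np.argmax(scores))]

# Stage 1: multimodal data selection. Random arrays stand in for real
# audio frames, lyric embeddings, and physiological-signal windows.
rng = np.random.default_rng(0)
audio = rng.normal(size=1000)
lyrics = rng.normal(size=300)
physio = rng.normal(size=100)

fused = fuse_features([extract_features(m) for m in (audio, lyrics, physio)])
weights = rng.normal(size=(4, fused.size))  # untrained classifier weights
labels = ["happy", "sad", "angry", "calm"]
print(predict_emotion(fused, weights, labels))
```

Swapping any single stage (e.g. replacing concatenation with attention-based fusion, or the linear scorer with a trained network) leaves the other stages untouched, which is the modularity the framework argues for.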