🤖 AI Summary
This study addresses three key bottlenecks in music emotion computation: (1) limited musical stimuli—small copyrighted libraries and heuristic mapping biases; (2) unimodal neuroimaging—reliance solely on EEG; and (3) poor portability—bulky, high-channel gel-based EEG systems. To overcome these, we propose MEEtBrain: (1) AI-generated large-scale affective music to eliminate manual selection bias; (2) a lightweight, wireless, synchronous EEG-fNIRS headband using dry electrodes for real-world deployment; and (3) the first large-scale, open-source multimodal emotional brain dataset (44 participants, >14 hours). Our method jointly models dry-electrode EEG and fNIRS signals to accurately decode emotional valence and arousal. Results demonstrate significantly improved ecological validity and scalability, establishing a novel paradigm for music neuroscience and portable brain-computer interfaces.
📝 Abstract
Emotions critically influence mental health, driving interest in music-based affective computing that decodes neurophysiological signals with brain-computer interface (BCI) techniques. While prior studies leverage music's accessibility for emotion induction, three key limitations persist: **(1) Stimulus Constraints:** music stimuli are confined to small corpora due to copyright and curation costs, and heuristic emotion-music mappings introduce selection biases that ignore individual affective profiles. **(2) Modality Specificity:** overreliance on unimodal neural data (e.g., EEG) ignores the complementary insights available from cross-modal signal fusion. **(3) Portability Limitations:** cumbersome setups (e.g., 64+ channel gel-based EEG caps) hinder real-world applicability due to procedural complexity and portability barriers. To address these limitations, we propose MEEtBrain, a portable multimodal framework for emotion analysis (valence/arousal) that integrates AI-generated music stimuli with synchronized EEG-fNIRS acquisition via a wireless headband. With MEEtBrain, music stimuli are automatically generated by AI at large scale, eliminating subjective selection biases while ensuring musical diversity. Our custom portable device, designed as a lightweight headband with dry electrodes, simultaneously collects EEG and fNIRS recordings. A 14-hour dataset from 20 participants was collected in the first recruitment to validate the framework's efficacy, with AI-generated music eliciting the target emotions (valence/arousal). We are actively expanding the multimodal dataset (44 participants in the latest release) and making it publicly available to promote further research and practical applications. **The dataset is available at https://zju-bmi-lab.github.io/ZBra.**
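To make the joint EEG-fNIRS modeling idea concrete, here is a minimal sketch of cross-modal feature fusion: EEG band powers (theta/alpha/beta) are concatenated with per-channel mean fNIRS activity to form a single feature vector for a downstream valence/arousal classifier. The function names, the specific feature choices, and the sampling rate are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def bandpower(eeg, fs, band):
    """Average spectral power per channel within a frequency band.

    eeg: array of shape (channels, samples); fs: sampling rate in Hz;
    band: (low_hz, high_hz). Uses a plain FFT periodogram for brevity.
    """
    freqs = np.fft.rfftfreq(eeg.shape[-1], 1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg, axis=-1)) ** 2
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[..., mask].mean(axis=-1)  # shape: (channels,)

def fuse_features(eeg, fnirs, fs=250):
    """Concatenate EEG band powers with mean fNIRS signal per channel.

    Illustrative early-fusion scheme: theta (4-8 Hz), alpha (8-13 Hz),
    and beta (13-30 Hz) powers, plus the per-channel fNIRS mean
    (a stand-in for a hemodynamic feature such as mean HbO change).
    """
    bands = [(4, 8), (8, 13), (13, 30)]
    eeg_feats = np.concatenate([bandpower(eeg, fs, b) for b in bands])
    fnirs_feats = fnirs.mean(axis=-1)
    return np.concatenate([eeg_feats, fnirs_feats])
```

The fused vector would then feed any standard classifier predicting valence/arousal labels; real pipelines typically add artifact rejection, baseline correction, and learned (rather than hand-crafted) representations.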