Wearable Music2Emotion: Assessing Emotions Induced by AI-Generated Music through Portable EEG-fNIRS Fusion

📅 2025-08-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses three key bottlenecks in music emotion computation: (1) limited musical stimuli—small copyrighted libraries and heuristic mapping biases; (2) unimodal neuroimaging—reliance solely on EEG; and (3) poor portability—bulky, high-channel gel-based EEG systems. To overcome these, we propose MEEtBrain: (1) AI-generated large-scale affective music to eliminate manual selection bias; (2) a lightweight, wireless, synchronous EEG-fNIRS headband using dry electrodes for real-world deployment; and (3) the first large-scale, open-source multimodal emotional brain dataset (44 participants, >14 hours). Our method jointly models dry-electrode EEG and fNIRS signals to accurately decode emotional valence and arousal. Results demonstrate significantly improved ecological validity and scalability, establishing a novel paradigm for music neuroscience and portable brain-computer interfaces.
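Synchronous EEG-fNIRS acquisition, as described above, implies aligning two wireless streams sampled at very different rates before joint modeling. A minimal sketch of that alignment step, using assumed sampling rates (the device's actual specs are not stated here):

```python
import numpy as np

# Assumed, illustrative sampling rates: EEG at 250 Hz, fNIRS at 10 Hz.
EEG_FS, FNIRS_FS = 250, 10
duration_s = 4

eeg_t = np.arange(0, duration_s, 1 / EEG_FS)      # EEG time base
fnirs_t = np.arange(0, duration_s, 1 / FNIRS_FS)  # fNIRS time base
eeg = np.sin(2 * np.pi * 10 * eeg_t)              # mock 10 Hz alpha-band rhythm
fnirs = 0.01 * fnirs_t                             # mock slow hemodynamic drift

# Resample fNIRS onto the EEG clock by linear interpolation so both
# modalities share one time base for fused analysis.
fnirs_on_eeg_clock = np.interp(eeg_t, fnirs_t, fnirs)

fused = np.stack([eeg, fnirs_on_eeg_clock])        # (2, n_samples) fused array
print(fused.shape)
```

Linear interpolation is only one of several reasonable choices here; in practice hardware timestamps and drift correction would also be needed for a wireless headband.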

📝 Abstract
Emotions critically influence mental health, driving interest in music-based affective computing via neurophysiological signals with brain-computer interface techniques. While prior studies leverage music's accessibility for emotion induction, three key limitations persist: (1) Stimulus Constraints: music stimuli are confined to small corpora due to copyright and curation costs, with selection biases from heuristic emotion-music mappings that ignore individual affective profiles. (2) Modality Specificity: overreliance on unimodal neural data (e.g., EEG) ignores complementary insights from cross-modal signal fusion. (3) Portability Limitation: cumbersome setups (e.g., 64+ channel gel-based EEG caps) hinder real-world applicability due to procedural complexity and portability barriers. To address these limitations, we propose MEEtBrain, a portable and multimodal framework for emotion analysis (valence/arousal) that integrates AI-generated music stimuli with synchronized EEG-fNIRS acquisition via a wireless headband. With MEEtBrain, music stimuli can be automatically generated by AI at a large scale, eliminating subjective selection biases while ensuring musical diversity. Our portable device, designed as a lightweight headband with dry electrodes, simultaneously collects EEG and fNIRS recordings. A 14-hour dataset from 20 participants was collected in the first recruitment to validate the framework's efficacy, with AI-generated music eliciting target emotions (valence/arousal). We are actively expanding the multimodal dataset (44 participants in the latest release) and making it publicly available to promote further research and practical applications. The dataset is available at https://zju-bmi-lab.github.io/ZBra.
Problem

Research questions and friction points this paper is trying to address.

Overcoming small music corpora and selection biases in emotion induction
Addressing unimodal neural data limitations with EEG-fNIRS fusion
Enhancing portability with lightweight wireless headband for real-world use
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI-generated music for emotion induction
Portable EEG-fNIRS fusion headband
Dry electrodes for wireless data collection
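The EEG-fNIRS fusion idea above can be illustrated with a minimal feature-level fusion sketch for binary valence classification. Everything below is an illustrative assumption (feature choices, channel counts, simulated data, and classifier), not the paper's actual pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

def eeg_bandpower_features(eeg):
    # eeg: (n_channels, n_samples); per-channel log-variance as a crude
    # stand-in for band-power features.
    return np.log(eeg.var(axis=1) + 1e-12)

def fnirs_hemoglobin_features(fnirs):
    # fnirs: (n_channels, n_samples); mean concentration change per channel.
    return fnirs.mean(axis=1)

# Simulated trials: 40 trials with binary high/low valence labels,
# 8 EEG channels and 4 fNIRS channels (assumed counts).
n_trials = 40
y = rng.integers(0, 2, size=n_trials)
X = []
for label in y:
    eeg = rng.standard_normal((8, 250)) * (1.0 + 0.3 * label)
    fnirs = rng.standard_normal((4, 50)) + 0.5 * label
    # Feature-level fusion: concatenate features from both modalities.
    X.append(np.concatenate([eeg_bandpower_features(eeg),
                             fnirs_hemoglobin_features(fnirs)]))
X = np.asarray(X)

clf = make_pipeline(StandardScaler(), LogisticRegression())
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```

Concatenating per-modality features before a single classifier is the simplest fusion strategy; joint or decision-level fusion models are common alternatives for EEG-fNIRS data.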
Sha Zhao
Zhejiang University, China
Song Yi
Zhejiang University, China
Yangxuan Zhou
Zhejiang University, China
Jiadong Pan
Hangzhou RongNao Technology Co., Ltd, China
Jiquan Wang
Zhejiang University, China
Jie Xia
Zhejiang University
Brain-machine interface, electrophysiological signal processing, flexible electrodes, fNIRS
Shijian Li
Zhejiang University
Pervasive computing, human-computer interaction, artificial intelligence
Shurong Dong
Zhejiang University
Bioelectronics, SAW/FBAR, ESD
Gang Pan
Tianjin University
Computer vision, multimodal, AI