🤖 AI Summary
This scoping and mapping review addresses gaps in research on music-based affective regulation and stress management, namely insufficient personalization, unclear therapeutic mechanisms, and weak integration of sensing and AI technologies. Synthesizing 28 empirical studies (646 participants in total), the review finds that most systems pair prerecorded music with wearable cardiorespiratory sensors and desktop interfaces, and it categorizes them along four dimensions: biosensing modalities, music types, computational models for affect or stress detection and music prediction, and biofeedback mechanisms. Key contributions include: (1) a structured taxonomy of biosensing-driven music intervention systems; (2) identification of open methodological, data privacy, and user control concerns; and (3) future directions, including multimodal biosensing, investigation of music's therapeutic mechanisms, and generative AI for personalized music interventions.
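To make the sense, detect, adapt loop that these biofeedback-based systems typically implement concrete, here is a minimal Python sketch: a short-term HRV feature (RMSSD) is computed from RR intervals, a toy rule stands in for a trained affect/stress model, and a track is chosen from a prerecorded library. The threshold, track names, library, and input window are illustrative assumptions, not components or values taken from any reviewed system.

```python
# Minimal sketch of the closed-loop pattern common to biofeedback-driven
# music systems: physiological sensing -> affect/stress estimate -> music
# adaptation. All thresholds, filenames, and data below are hypothetical.

import math
from typing import Sequence


def rmssd(rr_intervals_ms: Sequence[float]) -> float:
    """Root mean square of successive RR-interval differences, a standard
    short-term HRV feature; lower values are commonly associated with
    higher arousal/stress."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))


def estimate_state(rr_intervals_ms: Sequence[float],
                   stress_threshold_ms: float = 25.0) -> str:
    """Toy rule-based estimate; a real system would use a trained model
    over multiple features (HRV, respiration, EDA, ...)."""
    return "stressed" if rmssd(rr_intervals_ms) < stress_threshold_ms else "calm"


# Hypothetical prerecorded library keyed by detected state.
LIBRARY = {
    "stressed": "slow_ambient_60bpm.wav",  # down-regulating selection
    "calm": "neutral_background.wav",
}


def closed_loop_step(rr_window_ms: Sequence[float]) -> str:
    """One iteration of the sense -> detect -> adapt loop."""
    return LIBRARY[estimate_state(rr_window_ms)]


if __name__ == "__main__":
    # Simulated 10-beat RR window (ms) with low beat-to-beat variability.
    window = [820, 815, 818, 812, 816, 814, 817, 813, 815, 816]
    print(closed_loop_step(window))  # -> slow_ambient_60bpm.wav
```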
📝 Abstract
In the last decade, researchers have increasingly explored the use of biosensing technologies for music-based affective regulation and stress management interventions in laboratory and real-world settings. These systems -- including interactive music applications, brain-computer interfaces, and biofeedback devices -- aim to provide engaging, personalized experiences that improve therapeutic outcomes. In this scoping and mapping review, we summarize and synthesize systematic reviews and empirical research on biosensing systems with potential applications in music-based affective regulation and stress management, identify gaps in the literature, and highlight promising areas for future research. We identified 28 studies involving 646 participants, with most systems using prerecorded music, wearable cardiorespiratory sensors, or desktop interfaces. We categorize these systems based on their biosensing modalities, music types, computational models for affect or stress detection and music prediction, and biofeedback mechanisms. Our findings highlight the promising potential of these systems and suggest future directions, such as integrating multimodal biosensing, exploring therapeutic mechanisms of music, leveraging generative artificial intelligence for personalized music interventions, and addressing methodological, data privacy, and user control concerns.
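As one concrete illustration of the biofeedback mechanisms categorized above, the sketch below shows a tempo-entrainment rule in the spirit of the iso principle often cited in music-based regulation work: the music's tempo starts near the listener's heart rate and is nudged gradually toward a regulation target. The gain, target heart rate, and the simulated physiological response are assumptions for illustration, not parameters reported by any study in the review.

```python
# Sketch of a bidirectional biofeedback rule: music parameters (here, tempo)
# track the user's physiology while steering it toward a regulation target.
# Gain, target, and the simulated heart-rate response are illustrative.


def adapt_tempo(current_tempo_bpm: float,
                heart_rate_bpm: float,
                target_hr_bpm: float = 65.0,
                gain: float = 0.2) -> float:
    """Entrainment-style update: compute a desired tempo between the current
    heart rate and the target, then step the playback tempo partway toward
    it so changes stay gradual and non-disruptive."""
    desired = heart_rate_bpm + gain * (target_hr_bpm - heart_rate_bpm)
    return current_tempo_bpm + 0.5 * (desired - current_tempo_bpm)


# Example: a stressed listener at 92 bpm; tempo drifts downward over iterations.
tempo, hr = 92.0, 92.0
for step in range(5):
    tempo = adapt_tempo(tempo, hr)
    hr -= 2.0  # assume heart rate slowly follows the music (simulated)
    print(f"step {step}: tempo={tempo:.1f} bpm, heart rate={hr:.1f} bpm")
```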