Toward music-based stress management: Contemporary biosensing systems for affective regulation

📅 2025-07-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This scoping review addresses critical gaps in music-based emotion regulation and stress management research: insufficient personalization, unclear neurophysiological mechanisms, and weak integration of sensing and AI technologies. Synthesizing 28 studies (N = 646), the authors classify existing systems by biosensing modality (e.g., cardiorespiratory wearables), music type (mostly prerecorded), computational models for affect or stress detection and music prediction, and biofeedback mechanism, noting that most rely on desktop interfaces and curated music libraries. Building on this mapping, they outline a closed-loop intervention framework in which real-time physiological signals (HRV, respiration, etc.) drive emotion recognition that dynamically triggers personalized, AI-generated music and bidirectional biofeedback. Key contributions include: (1) an interpretable "physiology-emotion-music" mapping perspective; (2) privacy-preserving design with explicit user agency; and (3) a methodological and empirical foundation for clinical translation.
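The closed-loop idea summarized above (sensing → state detection → music adaptation) can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's implementation: the RMSSD feature is a standard HRV metric, but the stress threshold, tempo values, and function names are all illustrative choices.

```python
def rmssd(rr_intervals_ms):
    """Root mean square of successive RR-interval differences (ms),
    a standard time-domain HRV feature (lower RMSSD ~ lower vagal tone)."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5

def classify_stress(rmssd_ms, threshold_ms=25.0):
    """Toy rule: low HRV is read as elevated stress.
    The 25 ms threshold is an illustrative assumption, not a clinical cutoff."""
    return "stressed" if rmssd_ms < threshold_ms else "calm"

def select_music(state):
    """Map the detected state to a music adaptation (hypothetical parameters)."""
    return {"stressed": {"tempo_bpm": 60, "style": "slow ambient"},
            "calm":     {"tempo_bpm": 90, "style": "neutral playlist"}}[state]

# Simulated RR intervals (ms) with low beat-to-beat variability
rr = [812, 810, 814, 811, 813, 812, 810]
state = classify_stress(rmssd(rr))
print(state, select_music(state))  # low RMSSD -> "stressed" -> slow ambient
```

In a real system the RR series would stream from a wearable, the classifier would be a trained model rather than a threshold, and the output would parameterize a generative music engine with the user able to override it, per the review's emphasis on user control.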

📝 Abstract
In the last decade, researchers have increasingly explored using biosensing technologies for music-based affective regulation and stress management interventions in laboratory and real-world settings. These systems, including interactive music applications, brain-computer interfaces, and biofeedback devices, aim to provide engaging, personalized experiences that improve therapeutic outcomes. In this scoping and mapping review, we summarize and synthesize systematic reviews and empirical research on biosensing systems with potential applications in music-based affective regulation and stress management, identify gaps in the literature, and highlight promising areas for future research. We identified 28 studies involving 646 participants, with most systems utilizing prerecorded music, wearable cardiorespiratory sensors, or desktop interfaces. We categorize these systems based on their biosensing modalities, music types, computational models for affect or stress detection and music prediction, and biofeedback mechanisms. Our findings highlight the promising potential of these systems and suggest future directions, such as integrating multimodal biosensing, exploring therapeutic mechanisms of music, leveraging generative artificial intelligence for personalized music interventions, and addressing methodological, data privacy, and user control concerns.
Problem

Research questions and friction points this paper is trying to address.

Exploring biosensing for music-based stress management
Reviewing biosensing systems for affective regulation
Identifying gaps in music-based therapeutic research
Innovation

Methods, ideas, or system contributions that make the work stand out.

Biosensing technologies for affective regulation
Interactive music applications and biofeedback devices
Generative AI for personalized music interventions
Natasha Yamane
Northeastern University, Columbia University Irving Medical Center
autism · health informatics · dyadic analysis · biosensing
Varun Mishra
Northeastern University
Mobile Sensing · mHealth
Matthew S. Goodwin
Khoury College of Computer Sciences & Bouvé College of Health Sciences, Northeastern University, Boston, MA, USA