AffectMachine-Pop: A controllable expert system for real-time pop music generation

📅 2025-06-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses three key challenges in pop music generation: weak emotional controllability, poor real-time performance, and lack of interpretability. We propose the first controllable, interpretable, low-latency expert system for pop music generation grounded in continuous affective dimensions—arousal and valence. Methodologically, it integrates music-theoretic rule modeling, an emotion–music mapping engine, real-time physiological interfaces (HRV/EEG), and a lightweight scheduling framework, enabling both preset-emotion-driven generation and closed-loop biophysiological feedback regulation. Contributions include: (1) the first explicit, human-interpretable, and editable mapping from the arousal–valence space to retro-style pop music; (2) end-to-end latency under 300 ms; (3) statistically significant emotional alignment in listening experiments (p < 0.01) with a mean opinion score (MOS) of 4.2/5; and (4) successful integration into a neurofeedback prototype, demonstrating practical utility in interactive affective composition and emotion regulation.
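The summary above describes a rule-based engine that maps the continuous arousal–valence space to musical parameters. As a minimal illustrative sketch only (the paper's actual rules are not given here; all parameter names, ranges, and formulas below are assumptions), such a mapping might look like:

```python
# Hypothetical sketch of an arousal-valence -> musical-parameter mapping,
# in the spirit of the rule-based emotion-music engine described above.
# Ranges and coefficients are illustrative assumptions, not the paper's rules.

def map_affect_to_music(arousal: float, valence: float) -> dict:
    """Map arousal and valence in [0, 1] to coarse musical parameters."""
    if not (0.0 <= arousal <= 1.0 and 0.0 <= valence <= 1.0):
        raise ValueError("arousal and valence must lie in [0, 1]")
    tempo_bpm = 60 + 100 * arousal                 # higher arousal -> faster tempo
    mode = "major" if valence >= 0.5 else "minor"  # positive valence -> major mode
    velocity = int(40 + 80 * arousal)              # higher arousal -> louder notes
    return {"tempo_bpm": tempo_bpm, "mode": mode, "velocity": velocity}

print(map_affect_to_music(0.8, 0.7))
# {'tempo_bpm': 140.0, 'mode': 'major', 'velocity': 104}
```

Because every rule is an explicit, editable function of arousal and valence, the mapping stays human-interpretable, which is the interpretability property the summary emphasizes.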

📝 Abstract
Music is a powerful medium for influencing listeners' emotional states, and this capacity has driven a surge of research interest in AI-based affective music generation in recent years. Many existing systems, however, are black boxes that are not directly controllable, making them less flexible and adaptive to users. We present AffectMachine-Pop, an expert system capable of generating retro-pop music according to arousal and valence values, which can either be pre-determined or derived from a listener's real-time emotional state. To validate the efficacy of the system, we conducted a listening study demonstrating that AffectMachine-Pop is capable of generating affective music at target levels of arousal and valence. The system is tailored for use either as a tool for generating interactive affective music based on user input, or for incorporation into biofeedback or neurofeedback systems to assist users with emotion self-regulation.
Problem

Research questions and friction points this paper addresses.

Generating pop music conditioned on listeners' emotional states
Making AI music generation directly controllable
Validating system efficacy through a listening study
Innovation

Methods, ideas, or system contributions that make the work stand out.

Expert system for real-time pop music generation
Controllable via arousal and valence values
Supports interactive and biofeedback applications
Kathleen Agres
Centre for Music and Health, Yong Siew Toh Conservatory of Music, National University of Singapore, Singapore, Singapore
Adyasha Dash
KIIT Deemed to be University
Text mining · Machine learning · Big data analytics
Phoebe Chua
National University of Singapore
Stefan K. Ehrlich
SETLabs Research GmbH, Munich, Germany