🤖 AI Summary
This study addresses the need for emotion regulation by investigating the MindCube, a tangible, interactive stress-reduction device designed for affective research, as a musical controller. We propose two sound-mapping strategies: (1) a rule-based, non-AI mapping that uses real-time multimodal sensor data to modulate audio parameters, and (2) a generative-AI-enhanced semantic latent-space mapping, in which latent representations are imbued with affective semantics and support navigable, emotion-guided sound synthesis. Two real-time affective auditory feedback systems were implemented and evaluated for efficacy and scalability. To our knowledge, this work is the first to integrate generative AI with a physical interactive interface for emotion-driven music interaction. It establishes a new paradigm for affective music computing and provides a reproducible technical pipeline and design framework for empathic human–computer audio interfaces.
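As a concrete illustration of the rule-based strategy, the sketch below maps one multimodal sensor frame to audio parameters over OSC. The sensor fields (accelerometer, touch pressure), OSC addresses, and mapping ranges are illustrative assumptions, not the paper's actual mapping.

```python
# Minimal rule-based mapping sketch: hypothetical MindCube sensor frame
# -> synth parameters sent over OSC. Addresses and ranges are assumptions.
import math
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)  # synth listening for OSC

def map_frame(accel_xyz, touch_pressure):
    """Map one sensor frame to audio parameters."""
    # Motion energy (acceleration magnitude) drives the filter cutoff.
    energy = math.sqrt(sum(a * a for a in accel_xyz))
    cutoff_hz = 200.0 + min(energy, 2.0) / 2.0 * 4800.0  # clamp to 200-5000 Hz
    # Touch pressure (0..1) drives amplitude.
    amp = max(0.0, min(1.0, touch_pressure))
    client.send_message("/filter/cutoff", cutoff_hz)
    client.send_message("/amp", amp)

map_frame(accel_xyz=(0.1, 0.9, 0.3), touch_pressure=0.5)
```

In a real deployment this function would run in the device's polling loop, one call per sensor frame; the point here is only the shape of a direct, rule-based sensor-to-parameter mapping.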
📝 Abstract
In this work, we explore the musical interface potential of the MindCube, an interactive device designed to study emotions. Embedding diverse sensors and input devices, the interface resembles a fidget cube, a toy commonly used to help users relieve stress and anxiety. As such, it is a particularly well-suited controller for musical systems that aim to support emotion regulation. To this end, we present two mappings for the MindCube, one without AI and one with it. With our generative AI mapping, we propose a way to infuse meaning into a latent space, along with techniques for navigating it with an external controller; a minimal sketch of this idea follows below. We discuss our results and propose directions for future work.
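To make the latent-space mapping concrete, here is a minimal sketch of emotion-guided navigation: latent anchor vectors labeled with affective semantics are blended according to controller input, then passed to a decoder. The anchor values, dimensionality, and `decode` stub are hypothetical stand-ins for the paper's generative model, not its actual pipeline.

```python
# Sketch of semantic latent-space navigation, assuming a pretrained
# decoder and hand-assigned emotion anchors (both hypothetical here).
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 64

# Latent anchors labeled with affective semantics (in practice these
# might come from encoding reference sounds rated for each emotion).
anchors = {
    "calm":   rng.normal(size=LATENT_DIM),
    "tense":  rng.normal(size=LATENT_DIM),
    "joyful": rng.normal(size=LATENT_DIM),
}

def blend(weights):
    """Barycentric blend of emotion anchors; weights should sum to 1."""
    return sum(w * anchors[name] for name, w in weights.items())

def decode(z):
    # Placeholder for the generative model's decoder, which would
    # synthesize audio (or synth parameters) from the latent vector z.
    return z[:4]  # e.g., first four dimensions as synth parameters

# Controller input (e.g., cube orientation) sets the blend weights.
z = blend({"calm": 0.6, "tense": 0.1, "joyful": 0.3})
params = decode(z)
```

Navigating the space then amounts to updating the blend weights continuously from the external controller, so the user steers between affective regions rather than editing raw latent coordinates.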