A Real-Time Gesture-Based Control Framework

📅 2025-04-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of enabling performers to dynamically influence music through natural, real-time gestural input. Methodologically, it integrates YOLOv8-based pose estimation, LSTM-based temporal modeling, and DTW-based gesture alignment to achieve user-agnostic, lightweight on-device adaptation—requiring only 50–80 labeled samples per gesture—and leverages the WebAudio API for low-latency audio synthesis and parameter modulation. Its key contribution is the first implementation of a closed-loop visuo-auditory feedback system that enables end-to-end mapping from hand gestures to multiple audio parameters—including tempo, pitch, effects, and playback sequencing. The system achieves an end-to-end latency of <65 ms and attains a cross-user gesture recognition accuracy of 92.3%. It has been successfully deployed in 12 live performances and four interactive art installations.
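The DTW-based gesture alignment mentioned in the summary can be sketched as follows. This is a minimal illustration under assumptions, not the paper's implementation: gestures are represented as `(T, D)` keypoint sequences, and a query is labeled by its nearest template under dynamic time warping distance.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic time warping distance between two (T, D) keypoint sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])  # frame-to-frame distance
            # Extend the cheapest of: match, insertion, deletion.
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])

def classify_gesture(query: np.ndarray, templates: dict) -> str:
    """Assign the label of the template with minimal DTW distance to the query."""
    return min(templates, key=lambda label: dtw_distance(query, templates[label]))
```

Because DTW aligns sequences of different lengths, a template recorded at one speed can still match the same gesture performed faster or slower, which is one reason such matchers work with only tens of labeled samples.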

📝 Abstract
We introduce a real-time, human-in-the-loop gesture control framework that dynamically adapts audio and music to human movement by analyzing live video input. By creating a responsive connection between visual and auditory stimuli, the system enables dancers and performers not only to respond to music but also to influence it through their movements. Designed for live performances, interactive installations, and personal use, it offers an immersive experience in which users shape the music in real time. The framework integrates computer vision and machine learning techniques to track and interpret motion, allowing users to manipulate audio elements such as tempo, pitch, effects, and playback sequence. With ongoing training, it achieves user-independent operation, requiring as few as 50 to 80 labeled samples per simple gesture. The framework combines gesture training, cue mapping, and audio manipulation to create a dynamic, interactive experience: gestures are interpreted as input signals, mapped to sound-control commands, and used to adjust musical elements naturally, demonstrating the interplay between human interaction and machine response.
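The cue-mapping step the abstract describes (recognized gestures mapped to sound-control commands) might look like the sketch below. The paper targets the WebAudio API in the browser; this Python version is only illustrative, and the gesture names, parameter names, and numeric ranges are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class AudioState:
    """Hypothetical audio parameters a performer can modulate."""
    tempo_bpm: float = 120.0
    pitch_semitones: float = 0.0
    reverb_wet: float = 0.2     # illustrative effect parameter
    playing: bool = True

# Illustrative cue map: gesture label -> update applied to the audio state.
CUE_MAP = {
    "raise_hand":  lambda s: setattr(s, "tempo_bpm", min(s.tempo_bpm + 5, 200)),
    "lower_hand":  lambda s: setattr(s, "tempo_bpm", max(s.tempo_bpm - 5, 60)),
    "swipe_right": lambda s: setattr(s, "pitch_semitones", s.pitch_semitones + 1),
    "fist":        lambda s: setattr(s, "playing", not s.playing),
}

def apply_gesture(state: AudioState, gesture: str) -> AudioState:
    """Route a recognized gesture to its sound-control command, if mapped."""
    if gesture in CUE_MAP:
        CUE_MAP[gesture](state)
    return state
```

Keeping the map declarative means new gesture-to-parameter bindings can be added during rehearsal without touching the recognition or synthesis code.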
Problem

Research questions and friction points this paper is trying to address.

Real-time gesture control for dynamic audio adaptation
Interactive human-machine system for live music manipulation
Computer vision-based gesture tracking to adjust music elements
Innovation

Methods, ideas, or system contributions that make the work stand out.

Real-time gesture control using computer vision
Machine learning for motion tracking and interpretation
Dynamic audio manipulation based on human movement
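Taken together, these three pieces form a closed loop from camera frames to audio commands. A hypothetical orchestration sketch follows; the pose estimator is a stub, and the recognizer and controller are supplied by the caller (e.g. an LSTM/DTW matcher and a WebAudio parameter updater in the real system).

```python
import numpy as np

def estimate_keypoints(frame) -> np.ndarray:
    """Stub for YOLOv8-style pose estimation, returning a (D,) keypoint vector.
    A real system would run an on-device pose model on the video frame here."""
    return np.asarray(frame, dtype=float)

def run_pipeline(frames, recognize, control):
    """Closed loop: frames -> keypoint sequence -> gesture label -> audio command."""
    window = [estimate_keypoints(f) for f in frames]  # a sliding buffer in practice
    gesture = recognize(np.stack(window))             # temporal model + matcher
    return control(gesture)                           # cue map / audio update
```

In a live setting this loop runs per frame window, so the per-iteration budget must stay within the reported sub-65 ms end-to-end latency.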