🤖 AI Summary
This work addresses the challenge of 3D human pose estimation under non-stationary background music, where the music's own dynamic acoustic variations severely degrade estimation accuracy. To this end, the authors propose a non-invasive audio-driven pose estimation framework that, unlike existing methods requiring specialized acoustic excitation, leverages everyday, non-stationary background music as an active sensing signal. The approach introduces a Contrastive Pose Extraction Module and a Frequency-wise Attention Module to jointly model the coupling between acoustic dynamics and human motion, disentangling intrinsic musical variations from motion-induced acoustic perturbations. By integrating contrastive learning, hard negative mining, and frequency-adaptive modeling, the method achieves significant improvements over state-of-the-art approaches across diverse real-world scenarios, demonstrating strong generalization and practical deployability. The authors will publicly release both the source code and the first benchmark dataset for background-music-driven pose estimation.
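The summary above mentions contrastive learning with hard negative mining to separate pose cues from the music itself. The paper's loss is not specified here, so the following is a minimal sketch of a generic InfoNCE-style objective under assumed inputs: a similarity score between a recorded-audio feature and its matching pose feature, plus similarities to negatives (where a "hard" negative might be, for example, the same music paired with a different pose).

```python
import math

def info_nce_loss(sim_pos, sims_neg, temperature=0.1):
    """Generic InfoNCE-style contrastive loss (illustrative, not the
    paper's exact formulation).

    sim_pos: similarity between an audio feature and its matching pose.
    sims_neg: similarities to negative pairs; hard negatives (e.g. same
    music, different pose) would typically dominate this list.
    """
    logits = [sim_pos / temperature] + [s / temperature for s in sims_neg]
    m = max(logits)  # subtract max for numerical stability
    denom = sum(math.exp(l - m) for l in logits)
    # Negative log-probability of the positive pair among all candidates.
    return -(logits[0] - m - math.log(denom))
```

As expected of a contrastive objective, the loss shrinks as the positive pair becomes more similar than the negatives, pushing the encoder to keep pose-related structure and discard shared musical content.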
📝 Abstract
We propose BGM2Pose, a non-invasive 3D human pose estimation method using arbitrary music (e.g., background music) as active sensing signals. Unlike existing approaches that significantly limit practicality by employing intrusive chirp signals within the audible range, our method utilizes natural music that causes minimal discomfort to humans. Estimating human poses from ordinary music presents significant challenges. In contrast to sound sources specifically designed for measurement, regular music varies in both volume and pitch. These dynamic signal changes caused by the music are inevitably mixed with the sound-field alterations produced by human motion, making it difficult to extract reliable cues for pose estimation. To address these challenges, BGM2Pose introduces a Contrastive Pose Extraction Module that employs contrastive learning and hard negative sampling to eliminate musical components from the recorded data, isolating the pose information. Additionally, we propose a Frequency-wise Attention Module that enables the model to focus on subtle acoustic variations attributable to human movement by dynamically computing attention across frequency bands. Experiments suggest that our method outperforms existing methods, demonstrating substantial potential for real-world applications. Our datasets and code will be made publicly available.
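The abstract describes the Frequency-wise Attention Module only at a high level: attention weights are computed dynamically across frequency bands so the model can emphasize bands carrying motion-induced perturbations. The sketch below shows the core reweighting idea on a `[T x F]` spectrogram-like feature; the scoring inputs, shapes, and function names are assumptions for illustration, not the paper's implementation.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def frequency_wise_attention(spec, band_scores):
    """Reweight each frequency band of a [T x F] feature map.

    spec: T frames, each a list of F band energies.
    band_scores: one (assumed learned) relevance score per band;
    in a trained model these would come from a scoring network.
    Returns the reweighted feature map and the attention weights.
    """
    weights = softmax(band_scores)
    attended = [[frame[f] * weights[f] for f in range(len(frame))]
                for frame in spec]
    return attended, weights

# Toy usage: two frames, two bands, uniform scores give equal weights.
spec = [[1.0, 2.0], [3.0, 4.0]]
attended, weights = frequency_wise_attention(spec, [0.0, 0.0])
```

In a trained model, bands whose energy fluctuates with body motion would receive larger scores, while bands dominated by the music's own dynamics would be down-weighted.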