🤖 AI Summary
This work addresses two limitations of monocular visual-inertial motion capture: global translation inaccuracies caused by depth ambiguity, and the neglect of inter-subject body shape variations. To overcome these limitations, we propose an end-to-end real-time system that fuses a stereo camera with six sparse IMUs. By leveraging stereo vision to resolve depth ambiguity, our method directly regresses 3D keypoints and estimates body shape parameters. We further introduce a shape-aware fusion module that dynamically balances individual morphological differences with global motion estimation. Evaluated across multiple datasets, the proposed approach achieves state-of-the-art performance, operates at over 200 FPS, exhibits no drift during long-term capture, and significantly suppresses foot-sliding artifacts.
📝 Abstract
Recent advancements in visual-inertial motion capture systems have demonstrated the potential of combining monocular cameras with sparse inertial measurement units (IMUs) as cost-effective solutions that effectively mitigate the occlusion and drift issues inherent in single-modality systems. However, they are still limited by metric inaccuracies in global translation stemming from monocular depth ambiguity, and by shape-agnostic local motion estimation that ignores anthropometric variations. We present Stereo-Inertial Poser, a real-time motion capture system that leverages a single stereo camera and six IMUs to estimate metric-accurate and shape-aware 3D human motion. By replacing the monocular RGB input with stereo vision, our system resolves depth ambiguity through calibrated baseline geometry, enabling direct 3D keypoint extraction and body shape parameter estimation. IMU data and visual cues are fused to predict drift-compensated joint positions and root movements, while a novel shape-aware fusion module dynamically harmonizes anthropometric variations with global translation. Our end-to-end pipeline achieves over 200 FPS without optimization-based post-processing, enabling real-time deployment. Quantitative evaluations across various datasets demonstrate state-of-the-art performance. Qualitative results show that our method produces drift-free global translation over long recording sessions and reduces foot-skating artifacts.
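To illustrate why a calibrated stereo baseline resolves the depth ambiguity that a monocular camera cannot, here is a minimal sketch of depth-from-disparity triangulation for a rectified stereo pair. All numbers (focal length, baseline, disparity) are illustrative assumptions, not values from the paper, and the function names are hypothetical:

```python
def stereo_depth(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Depth from disparity for a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the camera")
    return focal_px * baseline_m / disparity_px


def triangulate_keypoint(u_left: float, v: float, disparity_px: float,
                         focal_px: float, cx: float, cy: float,
                         baseline_m: float) -> tuple:
    """Back-project a matched 2D keypoint into metric 3D camera coordinates."""
    z = stereo_depth(disparity_px, focal_px, baseline_m)
    # Pinhole back-projection: pixel offset from the principal point,
    # scaled by depth over focal length, gives metric X and Y.
    x = (u_left - cx) * z / focal_px
    y = (v - cy) * z / focal_px
    return (x, y, z)


# Illustrative example: f = 500 px, principal point (320, 240),
# 12 cm baseline, keypoint matched with a 30 px disparity.
point = triangulate_keypoint(400.0, 260.0, 30.0, 500.0, 320.0, 240.0, 0.12)
# depth Z = 500 * 0.12 / 30 = 2.0 m
```

The key property is that the recovered Z is in metric units because the baseline B is known from calibration; a monocular system has no such scale anchor, which is the source of the global translation errors the abstract describes.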