🤖 AI Summary
This work addresses key challenges in motion capture from head-mounted, body-facing stereo cameras for VR/AR: severe self-occlusion, scarcity of annotated real-world data, and inaccurate lower-body pose estimation. To this end, we propose a geometry-aware multimodal fusion framework. First, we design a lightweight VR acquisition system with floor-aligned geometric constraints, enabling the construction of the first large-scale real-world ego-facing motion dataset. Second, we introduce a novel training strategy grounded in motion-geometry priors. Third, we employ an end-to-end lightweight neural network that achieves real-time inference at 300 FPS. Evaluated in real-world scenarios, our method achieves state-of-the-art accuracy, reducing lower-body joint error by 32%, while significantly suppressing jitter and mesh-penetration artifacts. The framework enables markerless, low-latency, high-fidelity driving of virtual avatars.
📝 Abstract
Egocentric motion capture with a head-mounted, body-facing stereo camera is crucial for VR and AR applications but presents significant challenges such as heavy occlusions and limited annotated real-world data. Existing methods rely on synthetic pretraining and struggle to generate smooth and accurate predictions in real-world settings, particularly for the lower limbs. Our work addresses these limitations by introducing a lightweight VR-based data collection setup with on-board, real-time 6D pose tracking. Using this setup, we collected the most extensive real-world dataset to date for ego-facing, ego-mounted cameras, in both size and motion variability. Effectively integrating this multimodal input, device pose and camera feeds, is challenging due to the differing characteristics of each data source. To address this, we propose FRAME, a simple yet effective architecture that combines device pose and camera feeds for state-of-the-art body pose prediction through geometrically sound multimodal integration, and that runs at 300 FPS on modern hardware. Lastly, we showcase a novel training strategy to enhance the model's generalization capabilities. Our approach exploits the problem's geometric properties, yielding high-quality motion capture free from artifacts common in prior work. Qualitative and quantitative evaluations, along with extensive comparisons, demonstrate the effectiveness of our method. Data, code, and CAD designs will be available at https://vcai.mpi-inf.mpg.de/projects/FRAME/.
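The abstract does not specify how FRAME fuses the two modalities beyond "geometrically sound multimodal integration." As a minimal, purely illustrative sketch, one common way to combine a 6D device pose with per-camera image features is to concatenate them and regress joint positions with a small learned layer; every name and dimension below (6D pose vector, 128-dim camera features, 18 predicted joints) is an assumption for illustration, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_modalities(pose_6d, left_feat, right_feat, W, b):
    """Concatenate device pose with stereo camera features and apply
    one linear regressor. Illustrative only, not FRAME's real design."""
    x = np.concatenate([pose_6d, left_feat, right_feat])
    return W @ x + b

# Hypothetical inputs: 6D headset pose (rotation + translation) and
# 128-dim feature vectors from each of the two body-facing cameras.
pose = rng.standard_normal(6)
left = rng.standard_normal(128)
right = rng.standard_normal(128)

# Hypothetical regressor mapping fused features to 18 joints x 3 coords.
W = rng.standard_normal((54, 6 + 128 + 128)) * 0.01
b = np.zeros(54)

joints = fuse_modalities(pose, left, right, W, b).reshape(18, 3)
print(joints.shape)  # → (18, 3)
```

The appeal of such a late-fusion design is that the cheap, always-available device pose can anchor the global body configuration while camera features refine limb placement; the paper's actual integration additionally exploits geometric constraints not modeled here.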