🤖 AI Summary
Existing markerless monocular human motion capture methods struggle to accurately reconstruct fine-grained foot motion due to noisy foot annotations and insufficient motion diversity in training data. To address this, we propose FootMR, a method that bypasses direct reliance on image inputs by lifting 2D foot keypoint sequences to 3D and predicting only residual foot motion, leveraging large-scale motion capture data for enhanced accuracy. FootMR incorporates knee–foot motion context to alleviate ambiguities inherent in 2D-to-3D lifting, employs global joint rotation representations, and applies strong data augmentation to significantly improve generalization to extreme foot poses. We also introduce MOOF, the first 2D evaluation dataset featuring complex foot motions. Experiments demonstrate that FootMR outperforms existing approaches on MOOF, MOYO, and RICH, reducing ankle joint angle error by up to 30% on MOYO.
📝 Abstract
State-of-the-art methods can recover accurate overall 3D human body motion from in-the-wild videos. However, they often fail to capture fine-grained articulations, especially in the feet, which are critical for applications such as gait analysis and animation. This limitation results from training datasets with inaccurate foot annotations and limited foot motion diversity. We address this gap with FootMR, a Foot Motion Refinement method that refines foot motion estimated by an existing human recovery model through lifting 2D foot keypoint sequences to 3D. By avoiding direct image input, FootMR circumvents inaccurate image-3D annotation pairs and can instead leverage large-scale motion capture data. To resolve ambiguities of 2D-to-3D lifting, FootMR incorporates knee and foot motion as context and predicts only residual foot motion. Generalization to extreme foot poses is further improved by representing joints in global rather than parent-relative rotations and applying extensive data augmentation. To support evaluation of foot motion reconstruction, we introduce MOOF, a 2D dataset of complex foot movements. Experiments on MOOF, MOYO, and RICH show that FootMR outperforms state-of-the-art methods, reducing ankle joint angle error on MOYO by up to 30% over the best video-based approach.
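The refinement idea in the abstract can be illustrated with a minimal sketch: a network maps a 2D foot keypoint sequence plus the base model's coarse rotations (the knee/foot context) to residual global rotations, which are added back to the coarse estimate. All shapes, layer sizes, and the toy MLP below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

T = 16          # temporal window of frames (assumed)
K = 5           # 2D foot keypoints per frame (assumed)
J = 3           # refined joints, e.g. knee, ankle, toes (assumed)

rng = np.random.default_rng(0)

def base_model_rotations(T, J):
    """Stand-in for coarse global joint rotations (axis-angle) from an
    existing human motion recovery model, one per frame and joint."""
    return rng.normal(scale=0.1, size=(T, J, 3))

def lift_residual(keypoints_2d, context_rot, W1, b1, W2, b2):
    """Toy MLP: 2D keypoint sequence + knee/foot motion context ->
    residual global rotations, added onto the coarse estimate."""
    x = np.concatenate([keypoints_2d.reshape(-1), context_rot.reshape(-1)])
    h = np.maximum(0.0, W1 @ x + b1)           # ReLU hidden layer
    residual = (W2 @ h + b2).reshape(T, J, 3)  # per-frame, per-joint residual
    return context_rot + residual              # refined global rotations

# Random weights stand in for a network trained on motion capture data.
D_in = T * K * 2 + T * J * 3
H = 64
W1 = rng.normal(scale=0.01, size=(H, D_in)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.01, size=(T * J * 3, H)); b2 = np.zeros(T * J * 3)

kps = rng.normal(size=(T, K, 2))   # detected 2D foot keypoints per frame
base = base_model_rotations(T, J)  # coarse estimate, also used as context
refined = lift_residual(kps, base, W1, b1, W2, b2)
print(refined.shape)  # (16, 3, 3)
```

Because the input is keypoints plus context rather than pixels, such a model can be trained purely on motion capture data with synthetic 2D projections, which is the key point the abstract makes about sidestepping inaccurate image-3D annotation pairs.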