🤖 AI Summary
Existing monocular human motion reconstruction methods struggle to achieve accurate 3D spatial localization of humans within real-world scenes, limiting realistic interaction in virtual reality, augmented reality, and embodied AI. To address this, we propose a scene-human joint optimization framework that takes only a monocular RGB video and a static camera as input. Our method jointly estimates human mesh pose, foreground segmentation, and keyframe scene point clouds, then explicitly aligns the human mesh with scene geometry while enforcing root-joint position consistency across non-keyframes. Crucially, it requires no depth sensors or motion-capture systems. Evaluated on multiple public benchmarks and real-world web videos, our approach significantly outperforms state-of-the-art methods: it reduces 3D human localization error by 18.7% and improves scene-geometry consistency by 23.4%.
📝 Abstract
Animating realistic character interactions with the surrounding environment is important for autonomous agents in gaming, AR/VR, and robotics. However, current methods for human motion reconstruction struggle with accurately placing humans in 3D space. We introduce Scene-Human Aligned REconstruction (SHARE), a technique that leverages the scene geometry's inherent spatial cues to accurately ground human motion reconstruction. Each reconstruction relies solely on a monocular RGB video from a stationary camera. SHARE first estimates a human mesh and segmentation mask for every frame, alongside a scene point map at keyframes. It iteratively refines the human's positions at these keyframes by comparing the human mesh against the human point map extracted from the scene using the mask. Crucially, we also ensure that non-keyframe human meshes remain consistent by preserving their relative root joint positions to keyframe root joints during optimization. Our approach enables more accurate 3D human placement while reconstructing the surrounding scene, facilitating use cases on both curated datasets and in-the-wild web videos. Extensive experiments demonstrate that SHARE outperforms existing methods.