🤖 AI Summary
3D human mesh reconstruction from in-the-wild images suffers from inaccurate orientation in the world coordinate system, primarily because ground-truth camera rotation (especially the pitch angle) is unavailable, so the common zero-rotation assumption introduces substantial errors. To address this, we propose Mesh-Plug, a plug-and-play module built on a human-centered strategy: camera pitch is estimated solely from the RGB image and a depth map rendered from the initial mesh, without relying on environmental cues. A camera rotation prediction network grounded in the human body's spatial configuration estimates the pitch angle, and a mesh adjustment module then jointly refines the root joint orientation and full-body pose. Our method achieves significant improvements over state-of-the-art approaches on the SPEC-SYN and SPEC-MTP benchmarks, enabling more accurate and robust world-coordinate human mesh reconstruction without requiring real camera calibration.
📝 Abstract
Reconstructing accurate 3D human meshes in the world coordinate system from in-the-wild images remains challenging due to the lack of camera rotation information. While existing methods achieve promising results in the camera coordinate system by assuming zero camera rotation, this simplification leads to significant errors when transforming the reconstructed mesh to the world coordinate system. To address this challenge, we propose Mesh-Plug, a plug-and-play module that accurately transforms human meshes from camera coordinates to world coordinates. Our key innovation lies in a human-centered approach that leverages both RGB images and depth maps rendered from the initial mesh to estimate camera rotation parameters, eliminating the dependency on environmental cues. Specifically, we first train a camera rotation prediction module that focuses on the human body's spatial configuration to estimate camera pitch angle. Then, by integrating the predicted camera parameters with the initial mesh, we design a mesh adjustment module that simultaneously refines the root joint orientation and body pose. Extensive experiments demonstrate that our framework outperforms state-of-the-art methods on the benchmark datasets SPEC-SYN and SPEC-MTP.
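The core geometric step the abstract describes, mapping a mesh from camera to world coordinates once the camera pitch is known, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes pitch is the only rotation component (roll, yaw, and camera translation are ignored), and the function names are hypothetical.

```python
import math

def pitch_rotation(pitch_rad):
    """3x3 rotation about the camera x-axis by the predicted pitch angle."""
    c, s = math.cos(pitch_rad), math.sin(pitch_rad)
    return [[1.0, 0.0, 0.0],
            [0.0, c, -s],
            [0.0, s, c]]

def camera_to_world(vertex, pitch_rad):
    """Map one (x, y, z) camera-frame vertex to world coordinates by
    undoing the predicted camera pitch. The inverse of a rotation matrix
    is its transpose, so we apply R^T to the vertex. Simplifying
    assumption: roll/yaw and camera translation are omitted."""
    R = pitch_rotation(pitch_rad)
    # result[i] = sum_j R[j][i] * v[j]  (i.e., R^T @ v)
    return tuple(sum(R[j][i] * vertex[j] for j in range(3)) for i in range(3))

# A point one meter in front of the camera, with a 10-degree pitch:
v_world = camera_to_world((0.0, 0.0, 1.0), math.radians(10.0))
```

Under the zero-rotation assumption criticized in the abstract, `pitch_rad` would simply be 0 and the mapping would be the identity; the reconstruction error grows with the true (unmodeled) pitch.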