🤖 AI Summary
To address the scarcity of real-world annotated data and the limited pose/shape diversity in single-view RGB-D-based 3D human mesh reconstruction, this paper proposes M³ (Masked Mesh Modeling): a masked autoencoder framework for partial mesh completion, trained on large-scale synthetic RGB-D data generated from motion capture sequences. The method combines template vertex matching, virtual camera projection, and depth-aware modeling to improve reconstruction accuracy with low-cost RGB-D sensors. Evaluated on SURREAL and CAPE, M³ achieves per-vertex errors (PVE) of 16.8 mm and 22.0 mm, respectively, outperforming methods that take full-body point clouds as input. On BEHAVE, it attains a PVE of 70.9 mm, reducing error by 18.4 mm relative to a recently published RGB-only approach. These results demonstrate the value of depth information and synthetic training data for robust 3D human mesh recovery.
📝 Abstract
Despite significant progress in 3D human mesh estimation from RGB images, RGBD cameras, which offer additional depth data, remain underutilized. In this paper, we present a method for accurate 3D human mesh estimation from a single RGBD view, leveraging the affordability and widespread adoption of RGBD cameras in real-world applications. A fully supervised approach to this problem requires a dataset of paired RGBD images and 3D mesh labels. However, collecting such a dataset is costly and challenging; hence, existing datasets are small and limited in pose and shape diversity. To overcome this data scarcity, we leverage existing Motion Capture (MoCap) datasets. We first obtain complete 3D meshes from the body models found in MoCap datasets, and create partial, single-view versions of them by projection to a virtual camera. This simulates the depth data provided by an RGBD camera from a single viewpoint. Then, we train a masked autoencoder to complete the partial, single-view mesh. During inference, our method, which we name M$^3$ for ``Masked Mesh Modeling'', matches the depth values coming from the sensor to vertices of a template human mesh, creating a partial, single-view mesh. We then recover the parts of the 3D human body mesh that are not visible, resulting in a full-body mesh. M$^3$ achieves 16.8 mm and 22.0 mm per-vertex error (PVE) on the SURREAL and CAPE datasets, respectively, outperforming existing methods that use full-body point clouds as input. We obtain a competitive 70.9 mm PVE on the BEHAVE dataset, outperforming a recently published RGB-based method by 18.4 mm, highlighting the usefulness of depth data. Code will be released.
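The core data-generation step — projecting a complete mesh to a virtual camera so that only the vertices facing the sensor survive — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a pinhole camera with made-up intrinsics and uses a simple per-pixel z-buffer to decide vertex visibility; all names and parameters are illustrative.

```python
import numpy as np

def visible_vertex_mask(vertices, fx=500.0, fy=500.0, cx=320.0, cy=240.0,
                        width=640, height=480, depth_tol=0.01):
    """Approximate single-view visibility: project each vertex with a
    pinhole camera and keep, per pixel, only the vertices whose depth is
    close to the nearest depth seen at that pixel (z-buffer test).
    Returns a boolean mask over the input vertices."""
    X, Y, Z = vertices[:, 0], vertices[:, 1], vertices[:, 2]
    u = np.round(fx * X / Z + cx).astype(int)
    v = np.round(fy * Y / Z + cy).astype(int)
    in_frame = (Z > 0) & (u >= 0) & (u < width) & (v >= 0) & (v < height)
    idx = np.where(in_frame)[0]

    # First pass: record the nearest depth per pixel.
    zbuf = np.full((height, width), np.inf)
    for i in idx:
        zbuf[v[i], u[i]] = min(zbuf[v[i], u[i]], Z[i])

    # Second pass: a vertex is "visible" if it is within depth_tol of
    # the nearest depth at its pixel; everything behind it is culled.
    mask = np.zeros(len(vertices), dtype=bool)
    for i in idx:
        if Z[i] <= zbuf[v[i], u[i]] + depth_tol:
            mask[i] = True
    return mask

# Toy example: a front "layer" of points and a back layer placed directly
# behind it along the camera rays, so the back layer should be occluded.
front = np.stack([np.linspace(-0.2, 0.2, 50),
                  np.zeros(50),
                  np.full(50, 2.0)], axis=1)
back = front.copy()
back[:, 0] *= 1.5   # scale X by 3/2 so X/Z matches the front layer
back[:, 2] = 3.0    # farther from the camera, same pixel footprint
verts = np.vstack([front, back])

mask = visible_vertex_mask(verts)
partial_mesh = verts[mask]  # the "partial, single-view" vertex set
print(mask[:50].all(), mask[50:].any())  # → True False
```

The surviving vertex subset plays the role of the partial, single-view mesh that the masked autoencoder is then trained to complete back into a full-body mesh.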