🤖 AI Summary
Traditional parametric human body models (e.g., SMPL) rely on surface meshes, which make it inefficient to model geometric interactions between humans and surrounding scenes or objects; existing neural implicit volumetric models are either insufficiently robust under complex articulation or impose high computational and memory costs. This paper proposes VolumetricSMPL, a neural implicit volumetric body model that couples SMPL's shape and pose parameterization with a differentiable Signed Distance Function (SDF). Its core component, Neural Blend Weights (NBW), dynamically generates lightweight MLP decoder parameters from predicted shape- and pose-dependent coefficients, enabling adaptive implicit modeling at low cost. The method supports high-fidelity contact modeling and self-intersection resolution. Evaluated on human-object interaction reconstruction, scene-aware human mesh recovery, scene-constrained motion synthesis, and self-intersection resolution, it outperforms the prior occupancy model COAP with 10× faster inference, 6× lower GPU memory consumption, and improved accuracy and robustness.
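The NBW idea described above can be illustrated with a minimal NumPy sketch. All dimensions, the conditioning feature, and the small mixing head below are hypothetical stand-ins, not the paper's actual architecture: a bank of K learned weight matrices is blended with softmax coefficients predicted from a shape/pose feature, yielding an instance-specific MLP layer.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)

# Hypothetical sizes: K template weight matrices for one small MLP layer.
K, d_in, d_out = 4, 8, 8
templates = rng.standard_normal((K, d_in, d_out)) * 0.1  # learned bank (assumed)

def blend_layer(cond_feat, templates, mix_head):
    """Blend K weight matrices with shape/pose-dependent coefficients."""
    coeffs = softmax(mix_head @ cond_feat)         # (K,) blending weights, sum to 1
    return np.einsum('k,kio->io', coeffs, templates)  # instance-specific layer

cond = rng.standard_normal(10)                     # shape+pose feature (assumed)
mix_head = rng.standard_normal((K, 10)) * 0.1      # tiny coefficient predictor (assumed)
W = blend_layer(cond, templates, mix_head)

x = rng.standard_normal(d_in)                      # per-query-point feature
h = np.maximum(W.T @ x, 0.0)                       # one blended MLP layer with ReLU
print(W.shape, h.shape)
```

The design point is that only the small coefficient predictor runs per shape/pose, while the blended decoder stays tiny, which is how a compact MLP can remain expressive across articulations.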
📝 Abstract
Parametric human body models play a crucial role in computer graphics and vision, enabling applications ranging from human motion analysis to understanding human-environment interactions. Traditionally, these models use surface meshes, which pose challenges in efficiently handling interactions with other geometric entities, such as objects and scenes, typically represented as meshes or point clouds. To address this limitation, recent research has explored volumetric neural implicit body models. However, existing works are either insufficiently robust for complex human articulations or impose high computational and memory costs, limiting their widespread use. To this end, we introduce VolumetricSMPL, a neural volumetric body model that leverages Neural Blend Weights (NBW) to generate compact, yet efficient MLP decoders. Unlike prior approaches that rely on large MLPs, NBW dynamically blends a small set of learned weight matrices using predicted shape- and pose-dependent coefficients, significantly improving computational efficiency while preserving expressiveness. VolumetricSMPL outperforms the prior volumetric occupancy model COAP with 10x faster inference, 6x lower GPU memory usage, enhanced accuracy, and a Signed Distance Function (SDF) for efficient and differentiable contact modeling. We demonstrate VolumetricSMPL's strengths across four challenging tasks: (1) reconstructing human-object interactions from in-the-wild images, (2) recovering human meshes in 3D scenes from egocentric views, (3) scene-constrained motion synthesis, and (4) resolving self-intersections. Our results highlight its broad applicability and significant performance and efficiency gains.
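The differentiable SDF mentioned in the abstract is typically used for contact and penetration reasoning: scene or object points with a negative body SDF lie inside the body and can be penalized. A minimal sketch, using a unit-sphere SDF as a stand-in for VolumetricSMPL's learned body SDF (the function name and loss form below are illustrative assumptions):

```python
import numpy as np

def body_sdf(points):
    """Stand-in SDF of a unit sphere at the origin; in practice the
    learned, differentiable body SDF would be queried here.
    Convention: negative = inside the body, positive = outside."""
    return np.linalg.norm(points, axis=-1) - 1.0

def penetration_loss(scene_points):
    """Penalize points inside the body: sum of penetration depths |min(sdf, 0)|."""
    d = body_sdf(scene_points)
    return float(np.sum(np.clip(-d, 0.0, None)))

pts = np.array([[0.0, 0.0, 0.5],   # inside the sphere -> penetration depth 0.5
                [0.0, 0.0, 2.0]])  # outside           -> no penalty
print(penetration_loss(pts))  # 0.5
```

Because the loss is differentiable in the query points (and, with a learned SDF, in the body parameters), it can be dropped into optimization-based reconstruction or motion synthesis pipelines like the tasks listed above.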