VolumetricSMPL: A Neural Volumetric Body Model for Efficient Interactions, Contacts, and Collisions

📅 2025-06-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional parametric human body models (e.g., SMPL) rely on surface meshes, which make geometric interaction modeling between humans and scenes or objects inefficient; existing neural implicit volumetric models are bottlenecked by robustness to complex articulation, computational cost, or memory overhead. This paper proposes VolumetricSMPL, a neural implicit volumetric representation that integrates SMPL shape and pose priors with a differentiable Signed Distance Function (SDF), augmented by Neural Blend Weights (NBW) that dynamically generate lightweight MLP decoder parameters adapted to body shape and pose. The method enables efficient, differentiable contact modeling and self-intersection resolution. Evaluated on human-object interaction reconstruction, scene-aware human mesh recovery, scene-constrained motion synthesis, and self-intersection resolution, it significantly outperforms the prior volumetric occupancy model COAP, achieving 10× faster inference and 6× lower GPU memory consumption while improving accuracy and robustness.

📝 Abstract
Parametric human body models play a crucial role in computer graphics and vision, enabling applications ranging from human motion analysis to understanding human-environment interactions. Traditionally, these models use surface meshes, which pose challenges in efficiently handling interactions with other geometric entities, such as objects and scenes, typically represented as meshes or point clouds. To address this limitation, recent research has explored volumetric neural implicit body models. However, existing works are either insufficiently robust for complex human articulations or impose high computational and memory costs, limiting their widespread use. To this end, we introduce VolumetricSMPL, a neural volumetric body model that leverages Neural Blend Weights (NBW) to generate compact, yet efficient MLP decoders. Unlike prior approaches that rely on large MLPs, NBW dynamically blends a small set of learned weight matrices using predicted shape- and pose-dependent coefficients, significantly improving computational efficiency while preserving expressiveness. VolumetricSMPL outperforms prior volumetric occupancy model COAP with 10x faster inference, 6x lower GPU memory usage, enhanced accuracy, and a Signed Distance Function (SDF) for efficient and differentiable contact modeling. We demonstrate VolumetricSMPL's strengths across four challenging tasks: (1) reconstructing human-object interactions from in-the-wild images, (2) recovering human meshes in 3D scenes from egocentric views, (3) scene-constrained motion synthesis, and (4) resolving self-intersections. Our results highlight its broad applicability and significant performance and efficiency gains.
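The abstract's key mechanism is that NBW blends a small bank of learned weight matrices into one per-query MLP decoder, instead of running a single large MLP. Below is a minimal numpy sketch of that blending step under stated assumptions: the bank size `K`, the layer dimensions, the softmax normalization of the coefficients, and the `blend_weights` helper are all illustrative choices, not the paper's actual architecture, and the coefficients would in practice be predicted from SMPL shape and pose parameters rather than sampled randomly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a bank of K candidate weight matrices for one layer.
K, d_in, d_out = 4, 8, 16
weight_bank = rng.standard_normal((K, d_in, d_out))  # learned bases (random here)

def blend_weights(coeffs, bank):
    """Blend K weight matrices with shape/pose-dependent coefficients."""
    coeffs = np.exp(coeffs) / np.exp(coeffs).sum()  # softmax (an assumption)
    return np.einsum("k,kio->io", coeffs, bank)     # convex mix of the bank

# In the paper the coefficients come from shape/pose; here they are random.
coeffs = rng.standard_normal(K)
W = blend_weights(coeffs, weight_bank)

# One layer of the resulting lightweight decoder applied to a query feature.
x = rng.standard_normal(d_in)
y = np.tanh(x @ W)
```

The efficiency argument is visible in the shapes: the per-query network is a small `d_in × d_out` matrix, while expressiveness comes from the bank of `K` matrices it is blended from.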
Problem

Research questions and friction points this paper is trying to address.

Surface-mesh body models make geometric interaction and collision queries with scenes and objects inefficient
Existing volumetric neural implicit models impose high computational and memory costs or lack robustness to complex articulations
These limitations restrict accuracy and speed in human-environment interaction tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neural Blend Weights for efficient MLP decoders
Dynamic blending of learned weight matrices for computational efficiency
Signed Distance Function for differentiable contact modeling
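The last bullet above is the reason an SDF matters for this line of work: a differentiable signed distance lets contact and penetration be penalized directly, without mesh-mesh intersection tests. A minimal sketch of such a penalty follows; the `sphere_sdf` stand-in, the hinge-squared form of the loss, and the sample points are all assumptions for illustration, whereas VolumetricSMPL would supply a learned, pose-conditioned body SDF and its own loss formulation.

```python
import numpy as np

def sphere_sdf(points, center=np.zeros(3), radius=1.0):
    """Stand-in SDF (a unit sphere); negative inside, positive outside."""
    return np.linalg.norm(points - center, axis=-1) - radius

def collision_loss(points, sdf):
    """Penalize points that penetrate the body, i.e. where sdf(p) < 0."""
    d = sdf(points)
    return float(np.sum(np.minimum(d, 0.0) ** 2))  # hinge on penetration depth

pts = np.array([[0.0, 0.0, 0.5],   # inside the sphere: penetration depth 0.5
                [0.0, 0.0, 2.0]])  # outside: contributes nothing
loss = collision_loss(pts, sphere_sdf)  # -> 0.25
```

Because the SDF is differentiable, the same penalty can be backpropagated to body pose and shape parameters, which is what makes it usable inside the reconstruction and motion-synthesis tasks listed in the abstract.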