🤖 AI Summary
Physical simulation requires spatially varying mechanical parameters (Young's modulus $E$, Poisson's ratio $\nu$, density $\rho$), yet manual annotation is labor-intensive and does not generalize across objects. To address this, we propose VoMP, a feed-forward framework for automatic, voxel-level prediction of mechanical property fields on arbitrary renderable 3D objects. Our method aggregates multi-view geometric features per voxel and passes them to a Geometry Transformer that predicts per-voxel latent codes on a material manifold learned from real-world material data, so that decoded properties remain physically plausible. To obtain training data at scale, we build a voxel-level annotation pipeline that combines segmented 3D datasets, material databases, and a vision-language model, along with a new benchmark. Experiments demonstrate that VoMP outperforms state-of-the-art methods in both accuracy and inference speed, establishing a scalable paradigm for spatially varying material modeling in high-fidelity physical simulation.
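For context on how these three per-voxel fields are consumed downstream, the sketch below shows the standard isotropic linear-elasticity conversion that most simulators apply: $E$ and $\nu$ yield the Lamé parameters, while $\rho$ enters the mass terms. The function name and example values are illustrative only and are not part of VoMP.

```python
def lame_parameters(E: float, nu: float) -> tuple[float, float]:
    """Convert Young's modulus E (Pa) and Poisson's ratio nu to (lambda, mu) in Pa."""
    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))  # first Lamé parameter
    mu = E / (2.0 * (1.0 + nu))                     # shear modulus (second Lamé parameter)
    return lam, mu

# Example: a rubber-like voxel with E = 1 MPa, nu = 0.45
lam, mu = lame_parameters(1.0e6, 0.45)  # lam ≈ 3.10 MPa, mu ≈ 0.34 MPa
```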
📝 Abstract
Physical simulation relies on spatially varying mechanical properties, often laboriously hand-crafted. VoMP is a feed-forward method trained to predict Young's modulus ($E$), Poisson's ratio ($\nu$), and density ($\rho$) throughout the volume of 3D objects, in any representation that can be rendered and voxelized. VoMP aggregates per-voxel multi-view features and passes them to our trained Geometry Transformer to predict per-voxel material latent codes. These latents reside on a manifold of physically plausible materials, which we learn from a real-world dataset, guaranteeing the validity of decoded per-voxel materials. To obtain object-level training data, we propose an annotation pipeline combining knowledge from segmented 3D datasets, material databases, and a vision-language model, along with a new benchmark. Experiments show that VoMP estimates accurate volumetric properties, far outperforming prior art in accuracy and speed.
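To make the described flow concrete, here is a minimal PyTorch sketch of the inference path: aggregated per-voxel multi-view features go through a Geometry Transformer to produce per-voxel latents, which a decoder maps to $(E, \nu, \rho)$. All class names, dimensions, and the output range-squashing are illustrative assumptions, not the authors' released code; the material-manifold constraint is only emulated here by squashing the outputs into physically valid ranges.

```python
# Toy stand-in for the VoMP-style inference flow (not the actual implementation).
import torch
import torch.nn as nn

class GeometryTransformer(nn.Module):
    """Maps aggregated per-voxel features to material latent codes."""
    def __init__(self, feat_dim=256, latent_dim=32, n_layers=4, n_heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.to_latent = nn.Linear(feat_dim, latent_dim)

    def forward(self, voxel_feats):                       # (B, n_voxels, feat_dim)
        return self.to_latent(self.encoder(voxel_feats))  # (B, n_voxels, latent_dim)

class MaterialDecoder(nn.Module):
    """Decodes latents (assumed to lie on a learned material manifold)
    into per-voxel (E, nu, rho), squashed into physically plausible ranges."""
    def __init__(self, latent_dim=32):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 3))

    def forward(self, z):
        raw = self.mlp(z)
        E   = torch.exp(raw[..., 0])             # Young's modulus > 0 (Pa)
        nu  = 0.5 * torch.sigmoid(raw[..., 1])   # Poisson's ratio in (0, 0.5)
        rho = torch.exp(raw[..., 2])             # density > 0 (kg/m^3)
        return torch.stack([E, nu, rho], dim=-1)

# In the real pipeline, per-voxel features come from rendering the object from
# multiple views and aggregating back-projected image features per voxel;
# random features stand in for them here.
voxel_feats = torch.randn(1, 512, 256)          # (batch, n_voxels, feat_dim)
latents = GeometryTransformer()(voxel_feats)
properties = MaterialDecoder()(latents)         # (1, 512, 3): E, nu, rho per voxel
```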