🤖 AI Summary
Existing approaches to modeling deformable objects struggle to simultaneously achieve physical accuracy, generalization capability, and data efficiency. This work proposes a novel paradigm that, for the first time, integrates a differentiable Material Point Method (MPM) simulator with online multi-view RGB-D sensory feedback. By minimizing the discrepancy between predicted and observed visual data, the method adaptively optimizes MPM parameters to jointly reconstruct an object's geometry, appearance, and dynamics. The approach substantially improves data efficiency and generalization while preserving physical plausibility, and significantly outperforms traditional mass-spring models in simulation fidelity. This provides a robust and efficient foundation for robotic dexterous manipulation of complex soft objects.
📝 Abstract
Modeling deformable objects, especially continuum materials, in a way that is physically plausible, generalizable, and data-efficient remains challenging across 3D vision, graphics, and robotic manipulation. Many existing methods oversimplify the rich dynamics of deformable objects or require large training sets, which often limits generalization. We introduce embodied MPM (EMPM), a deformable object modeling and simulation framework built on a differentiable Material Point Method (MPM) simulator that captures the dynamics of challenging materials. From multi-view RGB-D videos, our approach reconstructs geometry and appearance, then uses an MPM physics engine to simulate object behavior by minimizing the mismatch between predicted and observed visual data. We further optimize MPM parameters online using sensory feedback, enabling adaptive, robust, and physics-aware object representations that open new possibilities for robotic manipulation of complex deformables. Experiments show that EMPM outperforms mass-spring baseline models. Project website: https://embodied-mpm.github.io.
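The core loop described above, fitting simulator parameters by minimizing the predicted-vs-observed discrepancy, can be sketched in miniature. This is only an illustrative toy, not the paper's method: `simulate` is a hypothetical one-parameter stand-in for a differentiable MPM step, the loss compares particle positions directly rather than rendered RGB-D frames, and finite differences stand in for the analytic gradients a differentiable simulator would provide.

```python
import numpy as np

def simulate(theta, rest, load=1.0):
    """Hypothetical stand-in for one differentiable MPM rollout: a single
    stiffness-like parameter theta controls deformation under a fixed load
    (stiffer material deforms less)."""
    return rest + load / theta

def visual_loss(theta, rest, observed):
    """Mean squared mismatch between predicted and observed particle
    positions, standing in for the rendered-vs-RGB-D discrepancy."""
    return np.mean((simulate(theta, rest) - observed) ** 2)

def optimize_online(theta, rest, observed, lr=2.0, steps=100, eps=1e-5):
    """Gradient descent on the visual loss; central finite differences
    replace the simulator's autodiff gradients in this toy."""
    for _ in range(steps):
        grad = (visual_loss(theta + eps, rest, observed)
                - visual_loss(theta - eps, rest, observed)) / (2 * eps)
        theta -= lr * grad
    return theta

rest = np.zeros(8)                # rest-state particle positions
observed = simulate(2.0, rest)    # "sensor" data from a true stiffness of 2.0
theta_fit = optimize_online(1.0, rest, observed)
```

Starting from a wrong stiffness guess, the loop recovers a parameter close to the value that generated the observations, which is the same identification-by-feedback principle the abstract describes at far larger scale.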