PhysTwin: Physics-Informed Reconstruction and Simulation of Deformable Objects from Videos

📅 2025-03-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of constructing physics-based digital twins of deformable objects, such as cloth, ropes, and plush toys, from sparse and occluded monocular video. The authors propose PhysTwin, the first unified framework for video-driven deformable digital-twin reconstruction. It jointly recovers complete geometry, infers dense physical properties (e.g., elasticity, damping), and reconstructs photorealistic appearance via a multi-stage differentiable pipeline that integrates inverse physical modeling, a generative shape prior, and 3D Gaussian splatting. The key contribution is the synergistic integration of inverse dynamics, visual perception priors, and generative rendering, which yields significant improvements in reconstruction accuracy, rendering fidelity, future motion prediction, and generalization to novel interactions. PhysTwin enables real-time interactive simulation and robot motion planning, demonstrating practical utility in robotic manipulation, extended reality (XR), and digital content creation.

📝 Abstract
Creating a physical digital twin of a real-world object has immense potential in robotics, content creation, and XR. In this paper, we present PhysTwin, a novel framework that uses sparse videos of dynamic objects under interaction to produce a photo- and physically realistic, real-time interactive virtual replica. Our approach centers on two key components: (1) a physics-informed representation that combines spring-mass models for realistic physical simulation, generative shape models for geometry, and Gaussian splats for rendering; and (2) a novel multi-stage, optimization-based inverse modeling framework that reconstructs complete geometry, infers dense physical properties, and replicates realistic appearance from videos. Our method integrates an inverse physics framework with visual perception cues, enabling high-fidelity reconstruction even from partial, occluded, and limited viewpoints. PhysTwin supports modeling various deformable objects, including ropes, stuffed animals, cloth, and delivery packages. Experiments show that PhysTwin outperforms competing methods in reconstruction, rendering, future prediction, and simulation under novel interactions. We further demonstrate its applications in interactive real-time simulation and model-based robotic motion planning.
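The abstract names spring-mass models as the physical backbone of the representation. As a rough illustration only (the symplectic-Euler integrator and every parameter below are assumptions for the sketch, not the paper's implementation), one simulation step of such a system might look like:

```python
import numpy as np

def spring_mass_step(x, v, edges, rest_len, k, damping, mass, dt,
                     g=np.array([0.0, -9.8, 0.0])):
    """One symplectic-Euler step of a spring-mass system.

    x: (N, 3) positions, v: (N, 3) velocities,
    edges: (E, 2) spring endpoint indices, rest_len: (E,) rest lengths,
    k: (E,) per-spring stiffness, damping: scalar velocity damping in [0, 1).
    """
    f = np.tile(mass * g, (len(x), 1))            # gravity on every particle
    d = x[edges[:, 1]] - x[edges[:, 0]]           # spring vectors
    length = np.linalg.norm(d, axis=1, keepdims=True)
    direction = d / np.maximum(length, 1e-8)
    # Hooke's law: force proportional to stretch beyond rest length
    f_spring = (k * (length[:, 0] - rest_len))[:, None] * direction
    np.add.at(f, edges[:, 0], f_spring)           # pulls endpoint 0 toward 1
    np.add.at(f, edges[:, 1], -f_spring)          # and vice versa
    v = (v + dt * f / mass) * (1.0 - damping)     # integrate velocity, damp
    x = x + dt * v                                # integrate position
    return x, v
```

In PhysTwin, per-spring stiffness and damping are exactly the quantities the inverse-modeling stage estimates from video, rather than being set by hand as here.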
Problem

Research questions and friction points this paper is trying to address.

Reconstructing deformable objects from sparse, partially occluded videos
Inferring dense physical properties needed for realistic simulation
Building virtual replicas that support real-time interaction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Physics-informed representation with spring-mass models
Multi-stage optimization-based inverse modeling framework
Integration of inverse physics with visual perception cues
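The optimization-based inverse modeling is described only at a high level here. A toy sketch of the underlying idea is to fit a stiffness value so that a simulated trajectory matches an observed one; the 1-D simulator, finite-difference gradients, and all hyperparameters below are illustrative assumptions, not the paper's multi-stage differentiable pipeline:

```python
import numpy as np

def simulate_1d(k, x0=1.0, v0=0.0, mass=1.0, dt=0.01, steps=100):
    """Roll out a 1-D spring anchored at the origin with stiffness k."""
    x, v, traj = x0, v0, []
    for _ in range(steps):
        v += dt * (-k * x) / mass   # spring force toward the origin
        x += dt * v
        traj.append(x)
    return np.array(traj)

def fit_stiffness(observed, k_init=5.0, lr=30.0, iters=300):
    """Recover stiffness by gradient descent on trajectory L2 error,
    estimating the gradient with central finite differences."""
    k, eps = k_init, 1e-4
    loss = lambda kk: np.mean((simulate_1d(kk) - observed) ** 2)
    for _ in range(iters):
        grad = (loss(k + eps) - loss(k - eps)) / (2.0 * eps)
        k -= lr * grad
    return k
```

Given a trajectory recorded with unknown stiffness, `fit_stiffness(observed)` descends on the trajectory mismatch. The paper instead optimizes dense per-spring properties jointly with geometry and appearance, but the inverse-problem formulation is the same: parameters are judged by how well their simulated rollout reproduces the observation.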