🤖 AI Summary
Conventional linear blend skinning (LBS) is differentiable and computationally efficient, but it suffers from volume loss and unnatural deformations, and it cannot model the physics-based responses of soft tissues, hair, and other elastic materials.
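For reference, the LBS baseline being criticized blends per-bone rigid transforms with convex weights and applies the blended transform to each vertex. A minimal NumPy sketch (names and shapes are illustrative, not from the paper) shows the operation; linearly averaging rotation matrices is exactly what produces the "candy-wrapper" volume collapse mentioned here:

```python
import numpy as np

def linear_blend_skinning(rest_verts, bone_transforms, weights):
    """Classic LBS.

    rest_verts:      (V, 3) rest-pose vertex positions
    bone_transforms: (B, 4, 4) homogeneous per-bone transforms
    weights:         (V, B) skinning weights, each row sums to 1
    Returns (V, 3) posed vertex positions.
    """
    V = rest_verts.shape[0]
    homo = np.concatenate([rest_verts, np.ones((V, 1))], axis=1)  # (V, 4)
    # Blend the 4x4 matrices per vertex (the source of LBS artifacts),
    # then apply the blended transform once per vertex.
    blended = np.einsum('vb,bij->vij', weights, bone_transforms)   # (V, 4, 4)
    posed = np.einsum('vij,vj->vi', blended, homo)                 # (V, 4)
    return posed[:, :3]
```

With identity bone transforms the output equals the rest pose, which is a quick sanity check for the weight normalization.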
Method: We propose a physics-driven, differentiable skinning framework that embeds rigid skeletons into deformable volumetric representations, coupling continuum mechanics with a particle–grid hybrid discretization scheme. Our approach enables joint optimization of material properties and skeletal motion and introduces material prototyping to drastically reduce learning complexity while preserving high representational capacity. It integrates soft-body dynamics simulation, an Eulerian background grid, and hyperelastic constitutive modeling.
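The particle–grid hybrid discretization described above is characteristic of MPM-style schemes: particles carry mass and velocity, momentum is scattered to an Eulerian background grid where forces and skeletal constraints act, and the result is gathered back. The paper's summary gives no implementation details, so the following is only a sketch of the particle-to-grid (P2G) transfer step, in 1D with linear hat weights; all names and the 1D setting are assumptions for illustration:

```python
import numpy as np

def particle_to_grid_1d(x_p, m_p, v_p, n_cells, dx):
    """One P2G momentum transfer onto an Eulerian grid (1D, linear weights).

    x_p, m_p, v_p: per-particle positions, masses, velocities
    n_cells, dx:   number of grid cells and cell width
    Returns (grid_mass, grid_velocity) over the n_cells + 1 nodes.
    """
    grid_m = np.zeros(n_cells + 1)
    grid_mv = np.zeros(n_cells + 1)
    for xp, mp, vp in zip(x_p, m_p, v_p):
        i = int(xp // dx)        # left node of the cell containing xp
        frac = xp / dx - i       # fractional position within the cell
        # Scatter mass and momentum to the two bracketing nodes.
        for node, w in ((i, 1.0 - frac), (i + 1, frac)):
            grid_m[node] += w * mp
            grid_mv[node] += w * mp * vp
    # Recover nodal velocity, avoiding division by zero on empty nodes.
    grid_v = np.divide(grid_mv, grid_m, out=np.zeros_like(grid_mv),
                       where=grid_m > 0)
    return grid_m, grid_v
```

Because every operation here is a smooth function of particle state, the transfer is differentiable, which is the property the framework exploits to optimize material properties and skeletal motion jointly.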
Results: Evaluated on synthetic datasets, our method produces more realistic, physically plausible deformations and demonstrates strong generalization and practical utility in pose transfer and related tasks, overcoming fundamental limitations of LBS.
📝 Abstract
Skinning and rigging are fundamental components in animation, articulated object reconstruction, motion transfer, and 4D generation. Existing approaches predominantly rely on Linear Blend Skinning (LBS) due to its simplicity and differentiability. However, LBS introduces artifacts such as volume loss and unnatural deformations, and it fails to model elastic materials like soft tissues, fur, and flexible appendages (e.g., elephant trunks, ears, and fatty tissues). In this work, we propose PhysRig: a differentiable physics-based skinning and rigging framework that overcomes these limitations by embedding the rigid skeleton into a volumetric representation (e.g., a tetrahedral mesh), which is simulated as a deformable soft-body structure driven by the animated skeleton. Our method leverages continuum mechanics and discretizes the object as particles embedded in an Eulerian background grid to ensure differentiability with respect to both material properties and skeletal motion. Additionally, we introduce material prototypes, significantly reducing the learning space while maintaining high expressiveness. To evaluate our framework, we construct a comprehensive synthetic dataset using meshes from Objaverse, The Amazing Animals Zoo, and Mixamo, covering diverse object categories and motion patterns. Our method consistently outperforms traditional LBS-based approaches, generating more realistic and physically plausible results. Furthermore, we demonstrate the applicability of our framework to the pose transfer task, highlighting its versatility for articulated object modeling.