🤖 AI Summary
Achieving generalizable, real-time physics-driven 3D animation remains challenging, particularly when handling diverse geometries and discretization schemes. This work proposes PhysSkin, the first mesh-agnostic and discretization-agnostic neural skinning-field autoencoder, which maps control-handle transformations to high-fidelity deformations. The method pairs a Transformer encoder with a cross-attention decoder, augmented by dynamic normalization and a conflict-aware gradient-correction mechanism. It is trained with physics-informed self-supervised learning that jointly balances energy minimization, smoothness, and orthogonality objectives. Experiments demonstrate that PhysSkin enables efficient, real-time, high-fidelity physics-driven animation across a wide range of 3D models, significantly improving both generalization and computational efficiency.
📝 Abstract
Achieving real-time physics-based animation that generalizes across diverse 3D shapes and discretizations remains a fundamental challenge. We introduce PhysSkin, a physics-informed framework that addresses this challenge. In the spirit of Linear Blend Skinning, we learn continuous skinning fields as basis functions that lift motion-subspace coordinates to full-space deformations, with the subspace defined by handle transformations. To generate mesh-free, discretization-agnostic, and physically consistent skinning fields that generalize across diverse 3D shapes, PhysSkin employs a new neural skinning-field autoencoder consisting of a Transformer-based encoder and a cross-attention decoder. Furthermore, we develop a novel physics-informed self-supervised learning strategy that incorporates on-the-fly skinning-field normalization and conflict-aware gradient correction, enabling an effective balance among energy minimization, spatial smoothness, and orthogonality constraints. PhysSkin delivers strong generalization in neural skinning and enables real-time physics-based animation.
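To make the Linear Blend Skinning lift concrete: given per-point skinning weights over a set of handles, each point's deformed position is the weight-blended result of applying every handle's affine transform to its rest position. The sketch below is a minimal NumPy illustration of this standard LBS formula, not PhysSkin's implementation (in the paper the weights come from the learned skinning field); the function name and array layout are our own assumptions.

```python
import numpy as np

def lbs_deform(rest_pts, weights, handle_transforms):
    """Standard Linear Blend Skinning (illustrative sketch).

    rest_pts:          (N, 3) rest-pose point positions
    weights:           (N, H) skinning weights, each row summing to 1
    handle_transforms: (H, 3, 4) per-handle affine transforms [R | t]
    Returns (N, 3) deformed positions.
    """
    n = rest_pts.shape[0]
    # Homogeneous coordinates so translation folds into one matmul.
    homo = np.concatenate([rest_pts, np.ones((n, 1))], axis=1)    # (N, 4)
    # Each handle's transform applied to every point: (H, N, 3).
    per_handle = np.einsum('hij,nj->hni', handle_transforms, homo)
    # Blend over handles with the per-point weights: (N, 3).
    return np.einsum('nh,hni->ni', weights, per_handle)
```

With identity transforms on every handle, the blend reproduces the rest pose exactly; a shared translation on all handles translates every point rigidly, which is a quick sanity check for any LBS implementation.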
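The "conflict-aware gradient correction" used to balance the energy, smoothness, and orthogonality losses is in the spirit of gradient-projection schemes such as PCGrad: when two loss gradients point in conflicting directions (negative dot product), the conflicting component is projected out so one objective does not undo progress on another. The following is a generic sketch of that projection under this assumption; the paper's exact correction rule may differ.

```python
import numpy as np

def project_conflicting(grads):
    """PCGrad-style conflict resolution (illustrative sketch).

    grads: list of flat gradient vectors, one per loss term.
    For each gradient, subtract its projection onto any other
    gradient it conflicts with (dot product < 0).
    Returns the corrected gradients, same shapes as the input.
    """
    corrected = []
    for i, gi in enumerate(grads):
        g = gi.astype(float).copy()
        for j, gj in enumerate(grads):
            if i == j:
                continue
            dot = g @ gj
            if dot < 0.0:  # conflicting direction: remove that component
                g = g - (dot / (gj @ gj)) * gj
        corrected.append(g)
    return corrected
```

After correction, each pair of gradients that originally conflicted ends up non-conflicting, so a summed update no longer cancels itself on any single objective.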