🤖 AI Summary
This work addresses two key challenges in cloth simulation: modeling spatially varying mechanical properties, and avoiding the computational expense and membrane-locking artifacts inherent in finite element methods. To this end, we propose Mass-Spring Net—a differentiable mass-spring framework that infers spatially varying material parameters directly from motion observations via a dual-objective loss function combining force and impulse residuals. Our approach sidesteps the numerical artifacts of traditional solvers, requires no explicit physical modeling or PDE solving, and trains significantly faster than graph neural networks and neural ODEs. It delivers high-fidelity dynamic reconstruction across multi-source datasets and generalizes well to unseen scenarios. The core contribution is the first integration of a differentiable mass-spring model with kinematically consistent, physics-informed losses—enabling efficient, robust, and interpretable surrogate modeling of spatially heterogeneous cloth.
📝 Abstract
Materials used in real clothing exhibit remarkable complexity and spatial variation due to common processes such as stitching, hemming, dyeing, printing, padding, and bonding. Simulating these materials, for instance with finite element methods, is often computationally demanding and slow. Worse, such methods can suffer from a numerical artifact known as "membrane locking" that makes cloth appear artificially stiff. Here we propose a general framework, called Mass-Spring Net, for learning a simple yet efficient surrogate model that captures the effects of these complex materials using only motion observations. The cloth is discretized into a mass-spring network with unknown material parameters that are learned directly from the motion data, using a novel force-and-impulse loss function. Our approach accurately models spatially varying material properties from a variety of data sources and is immune to the membrane locking that plagues FEM-based simulations. Compared to graph-based networks and neural ODE-based architectures, our method achieves significantly faster training times, higher reconstruction accuracy, and improved generalization to novel dynamic scenarios.
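The idea in the abstract — a mass-spring discretization with unknown per-spring stiffnesses, fit by penalizing both force residuals (Newton's second law per step) and impulse residuals (momentum change over the trajectory) — can be sketched as follows. This is a minimal illustration under assumed details, not the paper's implementation: the spring model, integrator, and the exact form of the two residual terms are simplifications, and all function names are hypothetical.

```python
import numpy as np

def spring_forces(x, edges, rest_len, k):
    """Net force on each node from linear (Hookean) springs with per-spring stiffness k."""
    i, j = edges[:, 0], edges[:, 1]
    d = x[j] - x[i]                                   # spring vectors
    L = np.linalg.norm(d, axis=1, keepdims=True)      # current lengths
    f = k[:, None] * (L - rest_len[:, None]) * d / L  # force on node i, toward j if stretched
    F = np.zeros_like(x)
    np.add.at(F, i, f)                                # scatter-add equal and
    np.add.at(F, j, -f)                               # opposite forces
    return F

def step(x, v, edges, rest_len, k, m, dt):
    """Semi-implicit Euler step of the mass-spring system."""
    v = v + dt * spring_forces(x, edges, rest_len, k) / m[:, None]
    x = x + dt * v
    return x, v

def force_impulse_loss(k, x_obs, v_obs, edges, rest_len, m, dt):
    """Dual-objective residual (assumed form): per-step force match
    plus a trajectory-level impulse/momentum match."""
    T = len(x_obs) - 1
    force_term, total_impulse = 0.0, np.zeros_like(v_obs[0])
    for t in range(T):
        F = spring_forces(x_obs[t], edges, rest_len, k)
        a_obs = (v_obs[t + 1] - v_obs[t]) / dt        # observed acceleration
        force_term += np.mean((F - m[:, None] * a_obs) ** 2)
        total_impulse += F * dt                       # accumulate predicted impulse
    # Impulse residual: total predicted impulse vs. observed momentum change.
    imp_term = np.mean((total_impulse - m[:, None] * (v_obs[-1] - v_obs[0])) ** 2)
    return force_term / T + imp_term
```

A scalar loss of this shape is differentiable in `k`, so per-spring (i.e., spatially varying) stiffness can be recovered by gradient descent against observed trajectories; with the true stiffnesses, both residuals vanish on data generated by the same integrator.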