🤖 AI Summary
To address the computational bottleneck that limits high-fidelity hair simulation in real-time virtual human applications, this paper proposes a neural hair motion modeling framework that generalizes to arbitrary poses, body shapes, and hairstyles. Methodologically, it introduces the first self-supervised training paradigm that eliminates reliance on costly ground-truth physics-based simulation data, combining quasi-static physical priors with lightweight spatiotemporal consistency constraints. Inference takes only a few milliseconds per frame on consumer-grade GPUs, substantially outperforming traditional physics-based solvers, and the model scales to draping one thousand hairstyles in 0.3 seconds. It generalizes robustly across large variations in pose, body shape, and hairstyle while maintaining temporal coherence and physically plausible dynamics, delivering the first efficient, general-purpose, and physically credible neural hair simulation solution tailored to real-time virtual human systems.
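The training recipe described above (quasi-static physical priors plus a lightweight consistency term, optimized without simulated ground truth) can be sketched in a few lines. The snippet below is an illustration only, not the paper's loss: the stretch, gravity, and temporal terms, their weights, and the strand tensor layout are all assumptions chosen to make the idea concrete.

```python
import torch

def self_supervised_hair_loss(pred_strands, prev_strands, rest_strands,
                              gravity=(0.0, -9.81, 0.0),
                              w_stretch=1.0, w_gravity=1e-3, w_temporal=0.1):
    """Illustrative self-supervised objective: quasi-static physical priors
    (segment inextensibility + gravitational potential) plus a lightweight
    temporal-consistency term. All terms and weights are assumptions.

    Tensors are (B, S, V, 3): batch, strands, vertices per strand, xyz.
    """
    # Quasi-static prior 1: predicted segment lengths should match the rest groom.
    seg_pred = pred_strands[..., 1:, :] - pred_strands[..., :-1, :]
    seg_rest = rest_strands[..., 1:, :] - rest_strands[..., :-1, :]
    stretch = ((seg_pred.norm(dim=-1) - seg_rest.norm(dim=-1)) ** 2).mean()

    # Quasi-static prior 2: gravitational potential energy of predicted vertices
    # (minimizing it lets strands sag under gravity instead of floating).
    g = torch.tensor(gravity, dtype=pred_strands.dtype, device=pred_strands.device)
    gravity_energy = -(pred_strands * g).sum(dim=-1).mean()

    # Lightweight temporal consistency: discourage large frame-to-frame jumps.
    temporal = ((pred_strands - prev_strands) ** 2).mean()

    return w_stretch * stretch + w_gravity * gravity_energy + w_temporal * temporal
```

Because every term is computed from the network's own predictions and the rest-state groom, no ground-truth simulation frames are needed during training.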
📝 Abstract
Realistic hair motion is crucial for high-quality avatars, but it is often limited by the computational resources available in real-time applications. To address this challenge, we propose a novel neural approach that predicts physically plausible hair deformations and generalizes to various body poses, shapes, and hairstyles. Our model is trained using a self-supervised loss, eliminating the need for expensive data generation and storage. We demonstrate our method's effectiveness across a wide range of pose and shape variations, showcasing robust generalization and temporally smooth results. With an inference time of only a few milliseconds on consumer hardware and the ability to predict the drape of 1000 grooms in 0.3 seconds, our approach is highly suitable for real-time applications.
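The scalability claim (draping many grooms in a single pass) is easiest to picture as one batched forward call of a strand-level network. The toy sketch below assumes a per-strand MLP conditioned on SMPL-like pose/shape vectors; `StrandDrapeMLP`, the input dimensions, and the strand counts are hypothetical and do not reflect the paper's actual architecture.

```python
import torch
import torch.nn as nn

class StrandDrapeMLP(nn.Module):
    """Toy per-strand drape predictor: rest strand + body parameters -> displaced strand."""

    def __init__(self, pose_dim=72, shape_dim=10, verts_per_strand=32, hidden=256):
        super().__init__()
        in_dim = pose_dim + shape_dim + verts_per_strand * 3   # body params + rest strand
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, verts_per_strand * 3),            # per-vertex displacement
        )

    def forward(self, rest_strands, pose, shape):
        B, S, V, _ = rest_strands.shape
        body = torch.cat([pose, shape], dim=-1)                  # (B, pose+shape)
        body = body[:, None, :].expand(B, S, -1)                 # broadcast to every strand
        x = torch.cat([body, rest_strands.reshape(B, S, V * 3)], dim=-1)
        return rest_strands + self.net(x).reshape(B, S, V, 3)    # draped positions

# Usage: stack 1000 grooms along the batch dimension and drape them in one forward pass.
model = StrandDrapeMLP().eval()
rest = torch.randn(1000, 64, 32, 3)          # 1000 grooms x 64 strands x 32 vertices (toy sizes)
pose, shape = torch.randn(1000, 72), torch.randn(1000, 10)
with torch.no_grad():
    draped = model(rest, pose, shape)         # (1000, 64, 32, 3)
```

Batching grooms this way lets the cost of a single network evaluation be amortized across all of them, which is what makes sub-second draping of many hairstyles plausible on a GPU.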