🤖 AI Summary
Real-time hair simulation is critical for enhancing the realism and immersion of virtual characters; however, existing approaches are constrained by either the high computational cost of physics-based models or the inability of neural methods to capture dynamic motion (e.g., bouncing and swaying during jumping or walking), as they typically support only quasi-static modeling. This paper introduces the first fully self-supervised dynamic neural hair simulation framework, enabling end-to-end automatic reconstruction without manual annotations or artist intervention. Our method employs a lightweight strand-level neural network integrated with a physics-aware training scheme, achieving real-time dynamic simulation under low resource consumption. Experiments demonstrate significant improvements over state-of-the-art methods across diverse hairstyles, with superior stability, strong generalization to unseen motions and topologies, and practical viability for VR deployment.
📝 Abstract
Real-time hair simulation is a vital component in creating believable virtual avatars, as it provides a sense of immersion and authenticity. The dynamic behavior of hair, such as bouncing or swaying in response to character movements like jumping or walking, plays a significant role in enhancing the overall realism and engagement of virtual experiences. Existing hair simulation has largely followed two approaches: highly optimized physics-based systems and neural methods. However, state-of-the-art neural techniques have been limited to quasi-static solutions, failing to capture the dynamic behavior of hair. This paper introduces a novel neural method that breaks through these limitations, achieving efficient and stable dynamic hair simulation while outperforming existing approaches. We propose a fully self-supervised method that can be trained without any manual intervention or artist-generated training data, allowing it to be integrated with hair reconstruction techniques to enable automatic end-to-end avatar reconstruction. Our approach uses compact, memory-efficient neural networks to simulate hair at the strand level, enabling simulation of diverse hairstyles without excessive computational resources or memory requirements. We validate the effectiveness of our method on a variety of hairstyles, showcasing its potential for real-world applications.
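To make the idea of a compact strand-level dynamic simulator concrete, here is a minimal illustrative sketch, not the authors' architecture: a tiny MLP that maps a strand's two most recent vertex-position frames plus the head's motion to the next frame. All sizes (16 vertices per strand, 64 hidden units, a 6-D head linear/angular velocity input) and the residual-prediction design are assumptions for illustration; in the paper, the weights would be learned with a self-supervised, physics-aware loss rather than drawn at random.

```python
import numpy as np

rng = np.random.default_rng(0)
N_VERTS = 16          # vertices per strand (assumed)
STATE = N_VERTS * 3   # flattened xyz positions
HIDDEN = 64           # hidden width (assumed)

# Random weights stand in for parameters that would be learned
# with a self-supervised, physics-aware training scheme.
W1 = rng.normal(0, 0.1, (2 * STATE + 6, HIDDEN))  # two past frames + head motion
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.1, (HIDDEN, STATE))
b2 = np.zeros(STATE)

def step(x_t, x_prev, head_motion):
    """Predict next-frame vertex positions for one strand.

    x_t, x_prev : (N_VERTS, 3) current and previous vertex positions.
    head_motion : (6,) head linear + angular velocity (assumed encoding).

    Two past frames let the network infer velocity, which is what
    distinguishes a dynamic model from a quasi-static one; predicting a
    residual keeps the output near the current pose for rollout stability.
    """
    inp = np.concatenate([x_t.ravel(), x_prev.ravel(), head_motion])
    h = np.tanh(inp @ W1 + b1)
    delta = (h @ W2 + b2).reshape(N_VERTS, 3)
    return x_t + delta

# Example: a strand hanging straight down, head accelerating along +x.
x_prev = np.cumsum(np.tile([0.0, -0.02, 0.0], (N_VERTS, 1)), axis=0)
x_t = x_prev.copy()
x_next = step(x_t, x_prev, np.array([0.1, 0, 0, 0, 0, 0]))
print(x_next.shape)  # (16, 3)
```

Because the network is evaluated independently per strand, it parallelizes trivially across a full head of hair, which is one plausible reason a strand-level formulation stays within real-time budgets.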