DGH: Dynamic Gaussian Hair

📅 2025-12-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Dynamic hair modeling faces three core challenges: highly complex motion, severe self-occlusion, and intricate light scattering. Existing approaches, whether physics-based simulation or static capture, suffer from poor generalization, high computational cost, and heavy reliance on manual parameter tuning. This paper proposes the first end-to-end, data-driven framework for dynamic strand reconstruction. The method (1) introduces a coarse-to-fine, temporally consistent motion modeling mechanism; (2) employs strand-guided dynamic 3D Gaussian representations; and (3) establishes the first differentiable-rendering-based Gaussian optimization pipeline for dynamic hair. Fully decoupled from physics simulation, the approach generalizes across diverse hairstyles and head motions. It significantly outperforms state-of-the-art methods in both geometric accuracy and visual fidelity, supports real-time rendering and high-fidelity animation, and integrates seamlessly into 3D Gaussian-based digital human systems.
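The paper does not include code here, but the "strand-guided" Gaussian representation can be illustrated with a minimal sketch: attach one anisotropic Gaussian to each strand segment, elongated along the segment's tangent. The function name `strand_to_gaussians`, the `radius` parameter, and the exact parameterization are hypothetical, not taken from the paper.

```python
import numpy as np

def strand_to_gaussians(strand, radius=1e-3):
    """Place one anisotropic Gaussian per strand segment (illustrative only).

    strand: (N, 3) array of ordered points along a hair strand.
    Returns centers (N-1, 3), principal axes (N-1, 3), and scales (N-1, 3):
    each Gaussian sits at a segment midpoint and is stretched along the
    segment direction, staying thin in the two orthogonal directions.
    """
    p0, p1 = strand[:-1], strand[1:]
    centers = 0.5 * (p0 + p1)                      # segment midpoints
    seg = p1 - p0
    lengths = np.linalg.norm(seg, axis=1, keepdims=True)
    axes = seg / np.clip(lengths, 1e-8, None)      # unit tangents
    # Major scale covers half the segment; minor scales are the strand radius.
    scales = np.concatenate([0.5 * lengths,
                             np.full_like(lengths, radius),
                             np.full_like(lengths, radius)], axis=1)
    return centers, axes, scales

# A straight 4-point strand along the x-axis yields 3 Gaussians.
strand = np.array([[0., 0., 0.], [1., 0., 0.], [2., 0., 0.], [3., 0., 0.]])
centers, axes, scales = strand_to_gaussians(strand)
```

Keeping Gaussians bound to strand geometry like this is what makes the appearance optimization "strand-guided": when the strands move, the Gaussians move with them, so only appearance parameters need to be learned per frame.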

📝 Abstract
The creation of photorealistic dynamic hair remains a major challenge in digital human modeling because of the complex motions, occlusions, and light scattering. Existing methods often resort to static capture and physics-based models that do not scale as they require manual parameter fine-tuning to handle the diversity of hairstyles and motions, and heavy computation to obtain high-quality appearance. In this paper, we present Dynamic Gaussian Hair (DGH), a novel framework that efficiently learns hair dynamics and appearance. We propose: (1) a coarse-to-fine model that learns temporally coherent hair motion dynamics across diverse hairstyles; (2) a strand-guided optimization module that learns a dynamic 3D Gaussian representation for hair appearance with support for differentiable rendering, enabling gradient-based learning of view-consistent appearance under motion. Unlike prior simulation-based pipelines, our approach is fully data-driven, scales with training data, and generalizes across various hairstyles and head motion sequences. Additionally, DGH can be seamlessly integrated into a 3D Gaussian avatar framework, enabling realistic, animatable hair for high-fidelity avatar representation. DGH achieves promising geometry and appearance results, providing a scalable, data-driven alternative to physics-based simulation and rendering.
Problem

Research questions and friction points this paper is trying to address.

Learning dynamic hair motion and appearance efficiently
Generalizing across diverse hairstyles and head motions
Replacing costly, hand-tuned physics simulation with a scalable, data-driven alternative
Innovation

Methods, ideas, or system contributions that make the work stand out.

Coarse-to-fine model for temporally coherent hair motion
Strand-guided optimization for dynamic 3D Gaussian representation
Data-driven framework generalizing across hairstyles and motions
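A common way to realize a coarse-to-fine hair motion model is to first predict the motion of a sparse set of guide strands, then propagate it to the dense strands. The sketch below shows one simple propagation scheme, inverse-distance-weighted interpolation from the k nearest guide roots; the function name, the k-nearest-neighbor scheme, and all parameters are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def coarse_to_fine_motion(guide_disp, guide_roots, dense_roots, k=3):
    """Toy coarse-to-fine step: spread coarse guide-strand displacements
    to dense strands by inverse-distance weighting over the k nearest guides.

    guide_disp:  (G, 3) coarse per-guide displacement.
    guide_roots: (G, 3) guide-strand root positions.
    dense_roots: (D, 3) dense-strand root positions.
    Returns (D, 3) interpolated displacements.
    """
    # Pairwise distances from each dense root to each guide root: (D, G).
    d = np.linalg.norm(dense_roots[:, None, :] - guide_roots[None, :, :], axis=-1)
    idx = np.argsort(d, axis=1)[:, :k]             # k nearest guides per strand
    nd = np.take_along_axis(d, idx, axis=1)
    w = 1.0 / (nd + 1e-8)
    w /= w.sum(axis=1, keepdims=True)              # normalized IDW weights
    return (guide_disp[idx] * w[..., None]).sum(axis=1)

# A dense root halfway between two guides blends their displacements equally.
guides = np.array([[0., 0., 0.], [1., 0., 0.]])
disp = np.array([[0., 1., 0.], [0., 0., 1.]])
dense = np.array([[0.5, 0., 0.]])
out = coarse_to_fine_motion(disp, guides, dense, k=2)
```

In a learned pipeline the coarse stage would be a network predicting `guide_disp` per frame, and the fine stage would refine the interpolated result; this sketch only shows the propagation step connecting the two.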