PhysHead: Simulation-Ready Gaussian Head Avatars

πŸ“… 2026-04-07
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Existing digital avatar methods struggle to realistically model dynamic hair, often approximating it as a rigid shell and neglecting its volumetric structure and physical behavior. This work proposes a hybrid architecture that integrates a 3D parametric head mesh with a strand-based, physically simulatable hair representation, and for the first time unifies strand-level dynamics with a 3D Gaussian splatting appearance model. Trained on multi-view videos and enhanced by a vision-language model to infer the appearance of occluded regions, the method enables realistic hair motion driven by facial expressions, viewpoint changes, and external forces such as wind. Experiments demonstrate that the approach outperforms current baselines both qualitatively in visual fidelity and quantitatively across standard metrics.
πŸ“ Abstract
Realistic digital avatars require expressive and dynamic hair motion; however, most existing head avatar methods assume rigid hair movement. These methods often fail to disentangle hair from the head, representing it as a simple outer shell and failing to capture its natural volumetric behavior. In this paper, we address these limitations by introducing PhysHead, a hybrid representation for animatable head avatars with realistic hair dynamics learned from multi-view video. At the core is a 3D Gaussian-based layered representation of the head. Our approach combines a 3D parametric mesh for the head with strand-based hair, which can be directly simulated using physics engines. For the appearance model, we employ Gaussian primitives attached to both the head mesh and hair segments. This representation enables the creation of photorealistic head avatars with dynamic hair behavior, such as wind-blown motion, overcoming the constraints of rigid hair in existing methods. These animation capabilities, however, require new training schemes. In particular, we propose the use of VLM-based models to generate the appearance of regions that are occluded in the dynamic training sequences. In quantitative and qualitative studies, we demonstrate the capabilities of the proposed model and compare it with existing baselines. We show that our method can synthesize physically plausible hair motion alongside expression and camera control.
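The abstract's key representational idea — Gaussian primitives rigged to hair strand segments so that appearance follows the simulated strands — can be sketched roughly as follows. This is an illustrative sketch only: the function name, parameterization, and cross-section scale are assumptions, not the paper's actual API.

```python
import numpy as np

def attach_gaussians_to_strand(strand_pts, gaussians_per_segment=1):
    """Attach Gaussian primitives to each segment of a hair strand polyline.

    Each Gaussian is centered on its segment with its long axis along the
    segment tangent, so primitives move rigidly with the simulated strand.
    (Hypothetical helper; parameterization is an assumption.)
    """
    strand_pts = np.asarray(strand_pts, dtype=np.float64)
    starts, ends = strand_pts[:-1], strand_pts[1:]
    gaussians = []
    for s, e in zip(starts, ends):
        seg = e - s
        length = np.linalg.norm(seg)
        tangent = seg / length
        for k in range(gaussians_per_segment):
            t = (k + 0.5) / gaussians_per_segment
            gaussians.append({
                "mean": s + t * seg,                       # center on the segment
                "axis": tangent,                           # long axis along the strand
                "scale": (length / gaussians_per_segment,  # elongated along the tangent
                          0.02 * length, 0.02 * length),   # thin in cross-section
            })
    return gaussians

# A 3-point strand has 2 segments, hence 2 Gaussians.
strand = [[0, 0, 0], [0, 0, 1], [0, 0, 2]]
gs = attach_gaussians_to_strand(strand)
```

Because each Gaussian's pose is a pure function of its segment endpoints, re-running this attachment after every simulation step keeps the splatting appearance model consistent with the strand dynamics.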
Problem

Research questions and friction points this paper is trying to address.

head avatars
hair dynamics
rigid hair
volumetric behavior
dynamic motion
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gaussian Avatars
Dynamic Hair Simulation
Physics-Based Animation
Multi-view Video Learning
Occlusion Completion with VLM
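The "Dynamic Hair Simulation" and "Physics-Based Animation" contributions rely on strands that can be stepped by a standard physics engine under external forces such as wind. A toy stand-in is a pinned mass-spring chain; the constants and force model below are illustrative assumptions, not the paper's simulator.

```python
import numpy as np

def step_strand(pos, vel, rest_len, dt=1e-3, k=500.0, damping=0.5,
                wind=(1.0, 0.0, 0.0), gravity=(0.0, -9.8, 0.0)):
    """One semi-implicit Euler step of a root-pinned mass-spring hair strand
    under gravity and a constant wind force (unit mass per point).
    Hypothetical toy simulator; constants are illustrative."""
    external = np.asarray(gravity, dtype=np.float64) + np.asarray(wind, dtype=np.float64)
    forces = np.tile(external, (len(pos), 1))
    # Hooke springs between consecutive strand points keep segment lengths.
    for i in range(len(pos) - 1):
        d = pos[i + 1] - pos[i]
        length = np.linalg.norm(d)
        f = k * (length - rest_len) * (d / length)
        forces[i] += f
        forces[i + 1] -= f
    forces -= damping * vel      # simple velocity damping
    vel = vel + dt * forces
    vel[0] = 0.0                 # root point is pinned to the scalp
    pos = pos + dt * vel
    return pos, vel
```

Stepping a vertical strand with a +x wind bends the free end downwind while the root stays fixed, which is the qualitative behavior (wind-blown motion) the paper's avatars exhibit.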