EAvatar: Expression-Aware Head Avatar Reconstruction with Generative Geometry Priors

📅 2025-08-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing 3D Gaussian Splatting (3DGS) methods struggle to simultaneously capture fine-grained facial expressions and preserve local texture continuity, particularly in highly deformable regions, during dynamic head reconstruction. To address this, we propose an expression- and deformation-aware 3DGS framework. Our method introduces sparse key Gaussians that drive localized deformations, enabling efficient and controllable expression modeling; it further integrates high-fidelity 3D priors from pretrained generative models to strengthen geometric guidance and stabilize training. Through this generation-prior-guided sparse control and deformation propagation mechanism, the approach achieves superior detail fidelity and visual coherence while preserving texture consistency. Experiments demonstrate that the method outperforms state-of-the-art 3DGS and neural radiance field approaches in expression controllability, geometric accuracy, and rendering quality, establishing a high-fidelity paradigm for dynamic head reconstruction with broad applicability in AR/VR, gaming, and digital human content generation.

📝 Abstract
High-fidelity head avatar reconstruction plays a crucial role in AR/VR, gaming, and multimedia content creation. Recent advances in 3D Gaussian Splatting (3DGS) have demonstrated effectiveness in modeling complex geometry with real-time rendering capability and are now widely used in high-fidelity head avatar reconstruction tasks. However, existing 3DGS-based methods still face significant challenges in capturing fine-grained facial expressions and preserving local texture continuity, especially in highly deformable regions. To mitigate these limitations, we propose a novel 3DGS-based framework termed EAvatar for head reconstruction that is both expression-aware and deformation-aware. Our method introduces a sparse expression control mechanism, where a small number of key Gaussians are used to influence the deformation of their neighboring Gaussians, enabling accurate modeling of local deformations and fine-scale texture transitions. Furthermore, we leverage high-quality 3D priors from pretrained generative models to provide a more reliable facial geometry, offering structural guidance that improves convergence stability and shape accuracy during training. Experimental results demonstrate that our method produces more accurate and visually coherent head reconstructions with improved expression controllability and detail fidelity.
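The sparse expression control mechanism described above can be illustrated with a minimal sketch: a few key Gaussians carry expression-driven offsets, which are blended onto neighboring Gaussians by distance-based weights. The function name, the Gaussian-RBF blending scheme, and the `sigma` bandwidth are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of sparse-key deformation propagation: a small set of
# key Gaussians is displaced, and every other Gaussian inherits a weighted
# blend of those displacements. The RBF weighting below is an assumption.
import numpy as np

def propagate_deformation(positions, key_positions, key_offsets, sigma=0.05):
    """Blend per-key displacement offsets onto every Gaussian center.

    positions:     (N, 3) centers of all Gaussians
    key_positions: (K, 3) centers of the sparse key Gaussians
    key_offsets:   (K, 3) expression-driven displacement of each key
    sigma:         bandwidth controlling how far a key's influence reaches
    """
    # Pairwise squared distances between every Gaussian and every key: (N, K)
    d2 = ((positions[:, None, :] - key_positions[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))           # Gaussian RBF weights
    w = w / (w.sum(axis=1, keepdims=True) + 1e-8)  # normalize per Gaussian
    return positions + w @ key_offsets             # weighted offset blend

# Tiny usage example: two keys pull their nearest points in opposite
# directions; with a small sigma each point follows its closest key.
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
keys = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
offs = np.array([[0.0, 0.1, 0.0], [0.0, -0.1, 0.0]])
moved = propagate_deformation(pts, keys, offs, sigma=0.05)
```

Because the weights are normalized per Gaussian, points between two keys receive a smooth interpolation of their offsets, which is one plausible way to obtain the locally continuous deformations the paper emphasizes.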
Problem

Research questions and friction points this paper is trying to address.

Captures fine-grained facial expressions in head avatars
Preserves local texture continuity in deformable regions
Improves convergence stability and shape accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparse expression control mechanism with key Gaussians
Generative geometry priors from pretrained models
Expression-aware deformation modeling for local details