🤖 AI Summary
To address the limited deformation control and editability in animatable 3D head avatar generation, this paper proposes the Editable Gaussian Head (EG-Head), an explicit-representation framework that integrates 3D Gaussian Splatting (3DGS) into a 3D-aware GAN. Methodologically, it (1) combines 3D Morphable Model (3DMM) priors with UV texture maps to enable identity-preserving, precise facial expression control; (2) introduces an auxiliary set of 3DGS alongside tri-plane features to model complex non-facial structures such as hair; and (3) supports multi-granularity editing of illumination, expression, and texture. Experiments demonstrate that EG-Head achieves state-of-the-art controllability with strong 3D consistency, outperforming implicit-field approaches. The code and pretrained models are publicly released.
📝 Abstract
Generating animatable and editable 3D head avatars is essential for various applications in computer vision and graphics. Traditional 3D-aware generative adversarial networks (GANs), often using implicit fields like Neural Radiance Fields (NeRF), achieve photorealistic and view-consistent 3D head synthesis. However, these methods face limitations in deformation flexibility and editability, hindering the creation of lifelike and easily modifiable 3D heads. We propose a novel approach that enhances the editability and animation control of 3D head avatars by incorporating 3D Gaussian Splatting (3DGS) as an explicit 3D representation. This method enables easier illumination control and improved editability. Central to our approach is the Editable Gaussian Head (EG-Head) model, which combines a 3D Morphable Model (3DMM) with texture maps, allowing precise expression control and flexible texture editing for accurate animation while preserving identity. To capture complex non-facial geometries like hair, we use an auxiliary set of 3DGS and tri-plane features. Extensive experiments demonstrate that our approach delivers high-quality 3D-aware synthesis with state-of-the-art controllability. Our code and models are available at https://github.com/liguohao96/EGG3D.
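The abstract's core composition, facial Gaussians anchored to a 3DMM surface (so expression coefficients move them) plus a free auxiliary set of Gaussians for non-facial geometry like hair, can be illustrated with a minimal toy sketch. Everything below is illustrative: the array sizes, the linear blendshape model, and the function names (`deform_3dmm`, `assemble_gaussian_means`) are assumptions for exposition, not EG-Head's actual implementation, which also carries per-Gaussian scales, rotations, opacities, and texture-map-derived colors.

```python
import numpy as np

# Hypothetical toy sizes; the real model's dimensions are not given in the abstract.
N_VERTS, N_EXPR, N_AUX = 5, 3, 4

rng = np.random.default_rng(0)

# 3DMM-style linear expression model: vertices = mean + basis @ coefficients.
mean_verts = rng.standard_normal((N_VERTS, 3))
expr_basis = 0.1 * rng.standard_normal((N_VERTS, 3, N_EXPR))

def deform_3dmm(expr_coeffs):
    """Deform the mean head mesh with linear expression blendshapes."""
    return mean_verts + expr_basis @ expr_coeffs

# Facial Gaussians are anchored to mesh vertices, so 3DMM expression control
# moves them with the surface; auxiliary Gaussians (hair, etc.) are free
# parameters, unconstrained by the morphable model.
aux_means = rng.standard_normal((N_AUX, 3))

def assemble_gaussian_means(expr_coeffs):
    """Concatenate expression-driven facial means with the auxiliary set."""
    face_means = deform_3dmm(expr_coeffs)
    return np.concatenate([face_means, aux_means], axis=0)

neutral = assemble_gaussian_means(np.zeros(N_EXPR))
smiling = assemble_gaussian_means(np.array([1.0, 0.0, 0.0]))
```

The key property the sketch demonstrates: changing expression coefficients moves only the facial Gaussians, while the auxiliary set stays put, which is what makes identity-preserving animation and independent editing of non-facial regions possible.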