🤖 AI Summary
Existing approaches for fitting neural parametric head models (NPHMs) to single images suffer from poor geometric fidelity, weak generalization, and computational inefficiency. To address this, the paper proposes the first end-to-end framework that directly regresses full NPHM parameters from a single image, bypassing the iterative optimization that has been the longstanding fitting bottleneck. The method employs a domain-specific Vision Transformer (ViT) backbone, jointly trained with ground-truth supervision in signed distance function (SDF) space and with surface-normal pseudo-labels; geometry can optionally be refined further via differentiable optimization at inference time. The framework maintains real-time inference speed (>30 FPS) while significantly improving 3D facial geometric accuracy and the fidelity of expression dynamics, enabling scalable deployment on in-the-wild videos. Extensive experiments demonstrate state-of-the-art 3D reconstruction quality, outperforming mainstream 3D morphable models (3DMMs) and NeRF-based baselines, particularly in high-frequency detail recovery and cross-domain generalization.
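The two supervision signals named above can be illustrated with a toy sketch. This is not the paper's implementation: `toy_sdf_decoder` is a stand-in analytic shape (a sphere controlled by one latent entry), whereas a real NPHM decoder is a learned network; the loss shapes (L1 in SDF space, cosine penalty on normals) are common choices assumed here for illustration.

```python
import numpy as np

def toy_sdf_decoder(latent, points):
    # Hypothetical stand-in for an NPHM decoder: maps a latent code and
    # 3D query points to signed distances. Here: a sphere whose radius
    # is modulated by the first latent entry.
    radius = 0.5 + 0.1 * np.tanh(latent[0])
    return np.linalg.norm(points, axis=-1) - radius

def sdf_loss(pred_latent, gt_latent, points):
    # Direct supervision in SDF space: compare decoded SDF values of the
    # predicted and registered (ground-truth) codes at sampled points.
    return np.abs(toy_sdf_decoder(pred_latent, points)
                  - toy_sdf_decoder(gt_latent, points)).mean()

def normal_loss(pred_normals, pseudo_normals):
    # Pseudo-ground-truth supervision for 2D data: penalize angular
    # deviation between predicted normals and off-the-shelf estimates
    # (unit vectors assumed; 1 - cosine similarity per pixel).
    cos = np.sum(pred_normals * pseudo_normals, axis=-1)
    return (1.0 - cos).mean()

# Example: identical inputs give zero loss for both terms.
points = np.random.RandomState(0).randn(64, 3)
z = np.array([0.2, -0.1])
print(sdf_loss(z, z, points))          # 0.0
normals = np.tile([[0.0, 0.0, 1.0]], (10, 1))
print(normal_loss(normals, normals))   # 0.0
```

In the actual training setup, both losses would be backpropagated through the ViT regressor; the sketch only shows how the two supervision sources plug into one objective.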
📝 Abstract
Neural Parametric Head Models (NPHMs) are a recent advancement over mesh-based 3D morphable models (3DMMs) that facilitates high-fidelity geometric detail. However, fitting NPHMs to visual inputs is notoriously challenging due to the expressive nature of their underlying latent space. To this end, we propose Pix2NPHM, a vision transformer (ViT) network that directly regresses NPHM parameters given a single image as input. Compared to existing approaches, the neural parametric space allows our method to reconstruct more recognizable facial geometry and more accurate facial expressions. For broad generalization, we exploit domain-specific ViTs as backbones, which are pretrained on geometric prediction tasks. We train Pix2NPHM on a mixture of 3D data, including a total of over 100K NPHM registrations that enable direct supervision in SDF space, and large-scale 2D video datasets, for which normal estimates serve as pseudo ground-truth geometry. Pix2NPHM not only enables 3D reconstruction at interactive frame rates; geometric fidelity can also be improved by a subsequent inference-time optimization against estimated surface normals and canonical point maps. As a result, we achieve unprecedented face reconstruction quality that can run at scale on in-the-wild data.
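The inference-time optimization mentioned at the end of the abstract can be sketched as a short refinement loop: start from the regressed latent code and descend a geometric objective. This is a minimal illustration under assumed details, not the paper's pipeline: a real system would backpropagate through a differentiable SDF decoder or renderer against estimated normals and point maps, whereas here the objective is a toy quadratic and the gradient is taken by finite differences.

```python
import numpy as np

def refine(latent, objective, steps=50, lr=0.1, eps=1e-4):
    # Inference-time refinement: a few gradient steps on a geometric
    # objective, initialized from the regressor's prediction.
    latent = latent.astype(float).copy()
    for _ in range(steps):
        # Finite-difference gradient; a real implementation would use
        # autodiff through the decoder/renderer instead.
        grad = np.zeros_like(latent)
        for i in range(latent.size):
            e = np.zeros_like(latent)
            e[i] = eps
            grad[i] = (objective(latent + e) - objective(latent - e)) / (2 * eps)
        latent -= lr * grad
    return latent

# Toy objective with a known minimum, standing in for the normal /
# point-map alignment energy (hypothetical, for illustration only).
target = np.array([0.3, -0.7])
objective = lambda z: float(np.sum((z - target) ** 2))
refined = refine(np.zeros(2), objective)
print(refined)  # converges toward [0.3, -0.7]
```

The key design point reflected here is that regression provides a strong initialization, so only a handful of optimization steps are needed, which is what keeps the overall pipeline fast compared to optimization-from-scratch fitting.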