Single Image, Any Face: Generalisable 3D Face Generation

📅 2024-09-25
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of generalising 3D face reconstruction from a single unconstrained in-the-wild image, this paper proposes the first multi-view consistent diffusion framework for cross-domain single-image 3D face generation. Methodologically, it (1) introduces input-conditioned mesh estimation, in place of ground-truth mesh supervision, to provide a generalisable geometric prior; (2) designs a multi-view joint generation scheme that enforces appearance consistency across viewpoints in the latent space; and (3) couples the diffusion model with neural radiance field (NeRF)-based surface reconstruction, trained end-to-end on synthetic multi-view data. Contributions include the first benchmark for generalisable single-image 3D face generation, significant improvements over prior methods in the out-of-domain setting, and highly competitive performance in the in-domain setting. Results show photorealistic appearance and consistent geometry across viewpoints.
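
The summary above describes a three-stage inference pipeline: estimate a mesh prior from the input image, jointly generate multiple consistent views with a diffusion model, then fit a neural surface to those views. The Python sketch below is only a schematic of that flow; every function name, shape, and constant is a hypothetical stand-in rather than the paper's released code or API.

import numpy as np

def estimate_conditional_mesh(image: np.ndarray) -> np.ndarray:
    """Stand-in for input-conditioned mesh estimation (the geometric prior).
    Returns dummy vertex positions; a real system would run a face
    reconstruction network here."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(5023, 3))  # e.g. a FLAME-sized vertex set

def generate_views_jointly(image: np.ndarray, mesh: np.ndarray,
                           num_views: int = 8) -> np.ndarray:
    """Stand-in for the multi-view joint diffusion step: all target views
    are generated together, conditioned on the input image and the
    estimated mesh, so appearance stays consistent across viewpoints."""
    rng = np.random.default_rng(1)
    h, w, _ = image.shape
    return rng.uniform(size=(num_views, h, w, 3))

def reconstruct_surface(views: np.ndarray) -> dict:
    """Stand-in for neural surface construction (NeRF-style fitting)
    on the synthesised multi-view images."""
    return {"num_views_used": len(views), "surface": None}

if __name__ == "__main__":
    face_image = np.zeros((256, 256, 3), dtype=np.float32)  # single input image
    mesh_prior = estimate_conditional_mesh(face_image)
    multi_views = generate_views_jointly(face_image, mesh_prior, num_views=8)
    avatar = reconstruct_surface(multi_views)
    print(avatar["num_views_used"])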

📝 Abstract
The creation of 3D human face avatars from a single unconstrained image is a fundamental task that underlies numerous real-world vision and graphics applications. Despite the significant progress made in generative models, existing methods are either ill-suited by design for human faces or fail to generalise from the restrictive training domain to unconstrained facial images. To address these limitations, we propose a novel model, Gen3D-Face, which generates 3D human faces from unconstrained single-image input within a multi-view consistent diffusion framework. Given a specific input image, our model first produces multi-view images, followed by neural surface construction. To incorporate face geometry information in a generalisable manner, we utilise input-conditioned mesh estimation instead of ground-truth mesh along with synthetic multi-view training data. Importantly, we introduce a multi-view joint generation scheme to enhance appearance consistency among different views. To the best of our knowledge, this is the first attempt and benchmark for creating photorealistic 3D human face avatars from single images for generic human subjects across domains. Extensive experiments demonstrate the superiority of our method over previous alternatives in out-of-domain single-image 3D face generation, and its top competitiveness in the in-domain setting.
Problem

Research questions and friction points this paper is trying to address.

Generating 3D human faces from a single unconstrained image
Generalising across diverse facial image domains
Ensuring multi-view consistency in 3D face generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-view consistent diffusion framework
Input-conditioned mesh estimation
Multi-view joint generation scheme
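
To illustrate why a multi-view joint generation scheme helps appearance consistency, here is a toy Python sketch: all per-view latents are denoised together under shared conditioning, and the toy denoiser couples them through their cross-view mean. This is purely illustrative under those assumptions; the paper's actual network, sampler, and conditioning are not reproduced here (a real model would couple views via cross-view attention rather than averaging).

import numpy as np

rng = np.random.default_rng(42)

def toy_joint_denoiser(latents: np.ndarray, condition: np.ndarray, t: int) -> np.ndarray:
    """Toy stand-in for a denoiser that sees ALL view latents at once.
    Coupling the views (here via their mean) is what lets a joint scheme
    pull the views toward a shared appearance; the timestep t is ignored
    in this simplified illustration."""
    cross_view_mean = latents.mean(axis=0, keepdims=True)
    return 0.5 * latents + 0.4 * cross_view_mean + 0.1 * condition

# hypothetical shapes: 8 target views, each with a 16x16x4 latent
num_views, latent_shape = 8, (16, 16, 4)
condition = rng.normal(size=(1, *latent_shape))        # input-image/mesh conditioning
latents = rng.normal(size=(num_views, *latent_shape))  # start from independent per-view noise

for t in reversed(range(50)):                          # simplified reverse process
    latents = toy_joint_denoiser(latents, condition, t)

# after joint denoising, the per-view latents agree closely (small cross-view std)
print(np.std(latents, axis=0).mean())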