Avat3r: Large Animatable Gaussian Reconstruction Model for High-fidelity 3D Head Avatars

📅 2025-02-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Conventional 3D head reconstruction relies on multi-view capture and test-time optimization, incurring high computational cost and limiting practical deployment. Method: We propose a method for reconstructing high-fidelity, animatable 3D head avatars from only a single image or a few input images. To this end, we adapt large-scale 3D reconstruction models into explicitly animatable representations. Our approach employs a 3D Gaussian-based geometric representation, jointly fusing position maps from DUSt3R and generic feature maps from Sapiens, and introduces an expression-conditioned cross-attention mechanism to enable natural facial animation. Contribution/Results: The method is robust to challenging real-world inputs (e.g., smartphone photos, sculpture images) and requires neither multi-view setups nor test-time optimization. It outperforms state-of-the-art methods on few-shot and single-image reconstruction benchmarks while significantly reducing inference cost, extending the practical use of digital humans beyond VFX and offline rendering pipelines.
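The summary mentions fusing position maps from DUSt3R with feature maps from Sapiens. A minimal sketch of one plausible fusion, per-pixel concatenation of the input streams into reconstruction tokens, is shown below; the shapes, the function name, and the plain concatenation are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def build_input_tokens(rgb, position_map, feature_map):
    """Fuse per-pixel signals into one token per pixel (hypothetical).

    rgb:          (H, W, 3)  input image
    position_map: (H, W, 3)  xyz position per pixel (DUSt3R-style)
    feature_map:  (H, W, C)  generic features (Sapiens-style)
    returns:      (H*W, 6+C) one fused token per pixel
    """
    fused = np.concatenate([rgb, position_map, feature_map], axis=-1)
    return fused.reshape(-1, fused.shape[-1])

# Example: a 4x4 image with 8-dim features yields 16 tokens of size 14.
tokens = build_input_tokens(
    np.zeros((4, 4, 3)), np.zeros((4, 4, 3)), np.zeros((4, 4, 8))
)
```

In practice such tokens would feed a transformer-based large reconstruction model that regresses 3D Gaussian parameters; the concatenation here only illustrates how heterogeneous per-pixel cues can share one token.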

📝 Abstract
Traditionally, creating photo-realistic 3D head avatars requires a studio-level multi-view capture setup and expensive optimization during test-time, limiting the use of digital human doubles to the VFX industry or offline renderings. To address this shortcoming, we present Avat3r, which regresses a high-quality and animatable 3D head avatar from just a few input images, vastly reducing compute requirements during inference. More specifically, we make Large Reconstruction Models animatable and learn a powerful prior over 3D human heads from a large multi-view video dataset. For better 3D head reconstructions, we employ position maps from DUSt3R and generalized feature maps from the human foundation model Sapiens. To animate the 3D head, our key discovery is that simple cross-attention to an expression code is already sufficient. Finally, we increase robustness by feeding input images with different expressions to our model during training, enabling the reconstruction of 3D head avatars from inconsistent inputs, e.g., an imperfect phone capture with accidental movement, or frames from a monocular video. We compare Avat3r with current state-of-the-art methods for few-input and single-input scenarios, and find that our method has a competitive advantage in both tasks. Finally, we demonstrate the wide applicability of our proposed model, creating 3D head avatars from images of different sources, smartphone captures, single images, and even out-of-domain inputs like antique busts. Project website: https://tobias-kirschstein.github.io/avat3r/
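The abstract's key claim is that simple cross-attention to an expression code suffices for animation. A minimal single-head sketch of such expression-conditioned cross-attention follows, where per-Gaussian tokens act as queries and the expression code provides keys and values; all dimensions, names, and weight matrices are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def expression_cross_attention(gaussian_tokens, expr_tokens, Wq, Wk, Wv):
    """Single-head cross-attention (illustrative): each 3D Gaussian token
    queries expression-code tokens, so its update depends on the target
    expression.

    gaussian_tokens: (N, d)  one token per 3D Gaussian
    expr_tokens:     (M, e)  expression code tokens
    Wq: (d, d), Wk: (e, d), Wv: (e, d)  projection weights
    """
    Q = gaussian_tokens @ Wq                        # (N, d)
    K = expr_tokens @ Wk                            # (M, d)
    V = expr_tokens @ Wv                            # (M, d)
    attn = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))  # (N, M)
    return gaussian_tokens + attn @ V               # residual update

rng = np.random.default_rng(0)
N, M, d, e = 5, 2, 16, 8
out = expression_cross_attention(
    rng.standard_normal((N, d)), rng.standard_normal((M, e)),
    rng.standard_normal((d, d)), rng.standard_normal((e, d)),
    rng.standard_normal((e, d)),
)
```

The residual form means a zero value projection leaves the Gaussians unchanged, so the expression signal acts as a learned perturbation of the reconstructed head rather than replacing it.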
Problem

Research questions and friction points this paper is trying to address.

High compute cost of multi-view capture and test-time optimization for 3D head avatars.
How to make large 3D reconstruction models animatable.
Lack of robustness to inconsistent inputs, e.g., imperfect phone captures.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Few-image 3D avatar reconstruction
Cross-attention for animation
Robust training with varied expressions