VASA-3D: Lifelike Audio-Driven Gaussian Head Avatars from a Single Image

📅 2025-12-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the dual challenge of reconstructing a high-fidelity 3D head model from a single portrait image and generating audio-driven dynamic facial expressions. We propose the first framework that transfers a 2D motion latent space—originally derived from VASA-1—into a 3D Gaussian splatting representation, enabling single-image optimization using synthetically generated video frames. To ensure fidelity, we introduce a tripartite consistency loss enforcing alignment across expression, geometry, and rendering, and adopt a video distillation training strategy. Our method supports free-viewpoint rendering and real-time inference (up to 75 FPS), producing 512×512-resolution 3D talking videos. Experiments demonstrate significant improvements over existing single-image reconstruction approaches in facial expression detail, geometric completeness, and visual realism. To our knowledge, this is the first end-to-end method achieving high-quality, audio-driven 3D Gaussian avatar generation directly from a single input image.
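The summary mentions a tripartite consistency loss across expression, geometry, and rendering, but does not give its formulation. A minimal sketch of how three such alignment terms might be weighted and combined follows; the L2 form, the dictionary keys, and the weights are all illustrative assumptions, not the paper's actual objective:

```python
# Hypothetical sketch of a tripartite consistency objective. The paper
# combines expression, geometry, and rendering alignment terms; the exact
# formulation is not given on this page, so weights and the L2 form here
# are assumptions for illustration only.

def l2(a, b):
    """Mean squared error between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def tripartite_loss(pred, target, w_expr=1.0, w_geom=0.5, w_render=1.0):
    """Weighted sum of the three consistency terms.

    pred/target are dicts with 'expr', 'geom', and 'render' vectors
    (standing in for motion-latent codes, geometric quantities, and
    rendered pixels, respectively).
    """
    return (w_expr * l2(pred["expr"], target["expr"])
            + w_geom * l2(pred["geom"], target["geom"])
            + w_render * l2(pred["render"], target["render"]))
```

In practice each term would compare tensors produced by the 3D model against the synthesized reference frames; the scalar weights balance expression fidelity against geometric and photometric agreement.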

📝 Abstract
We propose VASA-3D, an audio-driven, single-shot 3D head avatar generator. This research tackles two major challenges: capturing the subtle expression details present in real human faces, and reconstructing an intricate 3D head avatar from a single portrait image. To accurately model expression details, VASA-3D leverages the motion latent of VASA-1, a method that yields exceptional realism and vividness in 2D talking heads. A critical element of our work is translating this motion latent to 3D, which is accomplished by devising a 3D head model that is conditioned on the motion latent. Customization of this model to a single image is achieved through an optimization framework that employs numerous video frames of the reference head synthesized from the input image. The optimization employs various training losses that are robust to the artifacts and limited pose coverage in the generated training data. Our experiments show that VASA-3D produces realistic 3D talking heads that cannot be achieved by prior art, and it supports the online generation of 512×512 free-viewpoint videos at up to 75 FPS, facilitating more immersive engagements with lifelike 3D avatars.
Problem

Research questions and friction points this paper is trying to address.

Generates lifelike 3D talking heads from a single image and audio
Captures subtle facial expression details for realistic avatar animation
Reconstructs intricate 3D head avatars from limited single-image input
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages VASA-1 motion latent for realistic expression details
Translates motion latent to 3D via a conditioned head model
Customizes avatar from single image using optimization with synthesized frames
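The customization step above distills many frames synthesized by a 2D generator into one 3D model. The loop could be organized roughly as follows; `synthesize_frames`, `fit_head`, and the trivial one-parameter "model" are hypothetical stand-ins for illustration, not the paper's actual components or API:

```python
# Hypothetical outline of the video-distillation optimization: a
# VASA-1-like 2D generator synthesizes (motion latent, frame) training
# pairs from one portrait, and a latent-conditioned 3D head is fitted to
# them. All names and the toy scalar model are illustrative assumptions.

def synthesize_frames(portrait, n_frames):
    """Stand-in for the 2D talking-head generator: returns (latent, frame) pairs."""
    return [(i * 0.1, portrait + i * 0.1) for i in range(n_frames)]

def fit_head(portrait, n_frames=5, steps=100, lr=0.1):
    """Fit a toy one-parameter model frame = theta + latent by gradient descent.

    theta stands in for the learnable Gaussian-splat attributes that the
    real method would optimize against the synthesized frames.
    """
    data = synthesize_frames(portrait, n_frames)
    theta = 0.0
    for _ in range(steps):
        # Gradient of the mean squared error over all synthesized frames.
        grad = sum(2 * ((theta + z) - frame) for z, frame in data) / len(data)
        theta -= lr * grad
    return theta
```

The real system would replace the scalar update with gradient steps on Gaussian-splat parameters under the paper's consistency losses; the structure (synthesize once, then optimize against the frames) is the point of the sketch.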
Authors
Sicheng Xu (Microsoft Research Asia)
Guojun Chen (Microsoft Research Asia)
Jiaolong Yang (Microsoft Research; 3D Computer Vision)
Yizhong Zhang (Microsoft Research Asia)
Yu Deng (Microsoft Research Asia)
Steve Lin (Microsoft Research Asia)
Baining Guo (Distinguished Scientist, Microsoft Research; Computer Graphics, Virtual Reality, Geometric Modeling)