MANGO: Natural Multi-speaker 3D Talking Head Generation via 2D-Lifted Enhancement

📅 2026-01-05
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing audio-driven 3D talking head methods are largely confined to single-speaker scenarios and struggle to support natural bidirectional listen-speak interactions or seamless state transitions. To address this limitation, this work proposes MANGO—a two-stage training framework that achieves high-fidelity 3D multi-speaker conversational avatars using only image-level supervision, thereby circumventing the noise inherent in pseudo-3D labels. MANGO introduces a novel 2D-lifting-based alternating training mechanism that integrates dual-audio interaction modeling with 2D photometric supervision, incorporating a diffusion Transformer, a dual-audio interaction module, and an efficient 3D Gaussian renderer. Leveraging the newly curated MANGO-Dialog dataset—comprising over 500 identities and more than 50 hours of dialogue—the method significantly outperforms existing approaches in modeling dyadic 3D conversational dynamics, achieving notable advances in interaction naturalness and visual realism.
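The summary describes a two-stage pipeline: a diffusion Transformer first produces 3D motion from both speakers' audio, then a 3D Gaussian renderer turns that motion into images so a 2D photometric loss can supervise the motion model, with the two stages trained alternately. The paper text gives no implementation details, so the following is only a rough structural sketch: every class, function, and placeholder value here is an assumption for illustration, not the authors' code.

```python
# Hypothetical skeleton of the alternating two-stage training loop
# described above. All names and placeholder computations are assumed.
class MotionDiffusionTransformer:
    """Stage 1 (assumed): denoise 3D facial motion from dual audio."""
    def step(self, audio_pair):
        return [0.0] * 8  # placeholder motion parameters

class GaussianRenderer:
    """Stage 2 (assumed): splat motion-driven 3D Gaussians to a frame."""
    def render(self, motion):
        return [[m] for m in motion]  # placeholder rendered "image"

def photometric_loss(rendered, target):
    # 2D image-level supervision replacing noisy pseudo-3D labels
    return sum(abs(r[0] - t[0]) for r, t in zip(rendered, target))

def train_alternately(model, renderer, batches, epochs=2):
    for _ in range(epochs):
        for audio_pair, target_frame in batches:
            motion = model.step(audio_pair)   # stage 1: motion
            frame = renderer.render(motion)   # stage 2: render
            loss = photometric_loss(frame, target_frame)
            # gradients from the 2D loss would update both stages
    return loss

model = MotionDiffusionTransformer()
renderer = GaussianRenderer()
batches = [(None, [[1.0]] * 8)]  # (audio pair, target frame) placeholders
final_loss = train_alternately(model, renderer, batches)
print(final_loss)  # 8.0
```

The point of the alternation, per the summary, is that image-level supervision sidesteps error-prone pseudo-3D labels; in a real implementation the placeholder modules would be learned networks and the loss would backpropagate through a differentiable Gaussian renderer.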

📝 Abstract
Current audio-driven 3D head generation methods mainly focus on single-speaker scenarios, lacking natural, bidirectional listen-and-speak interaction. Achieving seamless conversational behavior, where speaking and listening states transition fluidly, remains a key challenge. Existing 3D conversational avatar approaches rely on error-prone pseudo-3D labels that fail to capture fine-grained facial dynamics. To address these limitations, we introduce MANGO, a novel two-stage framework that leverages pure image-level supervision with alternating training to mitigate the noise introduced by pseudo-3D labels, thereby achieving better alignment with real-world conversational behaviors. Specifically, in the first stage, a diffusion-based transformer with a dual-audio interaction module models natural 3D motion from multi-speaker audio. In the second stage, we use a fast 3D Gaussian renderer to generate high-fidelity images and provide 2D-level photometric supervision for the 3D motions through alternate training. Additionally, we introduce MANGO-Dialog, a high-quality dataset with over 50 hours of aligned 2D-3D conversational data across 500+ identities. Extensive experiments demonstrate that our method achieves exceptional accuracy and realism in modeling two-person 3D dialogue motion, significantly advancing the fidelity and controllability of audio-driven talking heads.
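The abstract's dual-audio interaction module conditions motion on both speakers' audio streams. One plausible reading, sketched below in plain numpy, is cross-attention from motion tokens to each audio stream separately, fused residually; the function names, the residual-sum fusion, and all shapes are assumptions, not the paper's actual design.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values, d):
    # scaled dot-product attention: queries attend to keys_values
    scores = queries @ keys_values.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ keys_values

def dual_audio_interaction(motion_tokens, self_audio, other_audio, d):
    # assumed fusion: motion tokens attend separately to the avatar's
    # own audio and the interlocutor's audio, then combine residually
    own = cross_attention(motion_tokens, self_audio, d)
    other = cross_attention(motion_tokens, other_audio, d)
    return motion_tokens + own + other

rng = np.random.default_rng(0)
d = 16
motion = rng.standard_normal((8, d))    # 8 motion tokens
a_self = rng.standard_normal((20, d))   # own audio frames
a_other = rng.standard_normal((20, d))  # interlocutor audio frames
out = dual_audio_interaction(motion, a_self, a_other, d)
print(out.shape)  # (8, 16)
```

Attending to both streams is what lets a single model drive listening behavior (reacting to the other speaker's audio) as well as speaking behavior, which is the bidirectional interaction the abstract emphasizes.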
Problem

Research questions and friction points this paper is trying to address.

audio-driven 3D talking head
multi-speaker interaction
conversational avatars
facial dynamics
3D head generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-speaker 3D talking head
2D-lifted supervision
diffusion-based transformer
dual-audio interaction
3D Gaussian rendering