JoyGen: Audio-Driven 3D Depth-Aware Talking-Face Video Editing

📅 2025-01-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current audio-driven talking-face generation methods face significant bottlenecks in lip-sync accuracy and visual realism. To address this, we propose a two-stage, depth-aware editing framework: the first stage jointly leverages 3D Morphable Model (3DMM) reconstruction and audio-to-facial-motion mapping to predict identity and expression coefficients; the second stage employs a depth-map-guided GAN, where depth supervision explicitly enforces frame-by-frame lip alignment. To support Chinese-speaking scenarios, we curate a high-quality 130-hour Chinese talking-face dataset and train jointly on HDTF and our curated data. Our method achieves state-of-the-art performance across key metrics, reducing lip-sync error (LSE) and improving perceptual quality (FID, LPIPS), while preserving identity consistency, facial expressiveness, and precise lip articulation. It enables end-to-end, high-fidelity, depth-aware talking-face video editing.

📝 Abstract
Significant progress has been made in talking-face video generation research; however, precise lip-audio synchronization and high visual quality remain challenging when editing lip shapes based on input audio. This paper introduces JoyGen, a novel two-stage framework for talking-face generation, comprising audio-driven lip motion generation and visual appearance synthesis. In the first stage, a 3D reconstruction model and an audio2motion model predict identity and expression coefficients, respectively. Next, by integrating audio features with a facial depth map, we provide comprehensive supervision for precise lip-audio synchronization in facial generation. Additionally, we construct a Chinese talking-face dataset containing 130 hours of high-quality video. JoyGen is trained on the open-source HDTF dataset and our curated dataset. Experimental results demonstrate that our method achieves superior lip-audio synchronization and visual quality.
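The abstract's two-stage pipeline can be sketched at the data-flow level. The sketch below is a minimal illustration, not the authors' implementation: the function names, coefficient dimensions (80 identity / 64 expression, as in common 3DMM setups), the 32x32 resolution, and the random linear stand-ins for the learned models are all assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumed, not from the paper).
ID_DIM, EXP_DIM, AUDIO_DIM, H, W = 80, 64, 128, 32, 32

def reconstruct_identity(ref_frame):
    """Stand-in for the 3D reconstruction model: frame -> identity coefficients."""
    M = rng.standard_normal((ID_DIM, ref_frame.size)) * 0.01
    return M @ ref_frame.ravel()                      # (ID_DIM,)

def audio2motion(audio_feats):
    """Stand-in for the audio2motion model: per-frame audio -> expression coeffs."""
    M = rng.standard_normal((EXP_DIM, AUDIO_DIM)) * 0.01
    return audio_feats @ M.T                          # (T, EXP_DIM)

def render_depth(identity, expression):
    """Stand-in renderer: 3DMM coefficients -> per-frame facial depth map."""
    T = expression.shape[0]
    d = (expression @ rng.standard_normal((EXP_DIM, H * W)) * 0.01
         + identity @ rng.standard_normal((ID_DIM, H * W)) * 0.01)
    return np.tanh(d).reshape(T, H, W)                # (T, H, W)

def synthesize(frames, depth_maps):
    """Stand-in for the depth-guided generator: edits the mouth region.
    Here we merely blend the depth cue into the lower half of each frame."""
    out = frames.copy()
    out[:, H // 2:, :] += 0.1 * depth_maps[:, H // 2:, :]
    return out                                        # (T, H, W)

# Toy inputs: 5 video frames plus per-frame audio features.
T = 5
ref_frame = rng.standard_normal((H, W))
frames = np.tile(ref_frame, (T, 1, 1))
audio = rng.standard_normal((T, AUDIO_DIM))

# Stage 1: audio-driven lip motion generation.
identity = reconstruct_identity(ref_frame)            # (80,)
expression = audio2motion(audio)                      # (5, 64)
depth = render_depth(identity, expression)            # (5, 32, 32)

# Stage 2: visual appearance synthesis under depth guidance.
edited = synthesize(frames, depth)                    # (5, 32, 32)
print(edited.shape)
```

The point of the sketch is the division of labor: stage one turns audio into geometry (expression coefficients rendered as a depth map), and stage two only has to synthesize appearance consistent with that geometry, which is where the paper's depth supervision for lip-audio synchronization enters.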
Problem

Research questions and friction points this paper is trying to address.

Audio-Visual Synchronization
Lip Movement Matching
Video Realism
Innovation

Methods, ideas, or system contributions that make the work stand out.

3D Depth Perception
Lip-Sync Accuracy
Audio-Driven Facial Animation
Authors
Qili Wang (JD.Com, Inc.)
Dajiang Wu (JD.Com, Inc.)
Zihang Xu (The University of Hong Kong)
Junshi Huang (Meituan) · Computer Vision, NLP, Machine Learning
Jun Lv (Shanghai Jiao Tong University) · Embodied AI, Robot Learning, Artificial Intelligence