🤖 AI Summary
Current audio-driven talking-face generation methods face significant bottlenecks in lip-sync accuracy and visual realism. To address this, we propose a two-stage, depth-aware editing framework: the first stage jointly leverages 3D Morphable Model (3DMM) reconstruction and audio-to-facial-motion mapping to predict identity and expression coefficients; the second stage employs a depth-map-guided GAN in which depth supervision, combined with audio features, explicitly enforces lip-audio alignment. To support Chinese-speaking scenarios, we curate a high-quality 130-hour Chinese talking-face dataset and train jointly on HDTF and our curated data. Our method achieves state-of-the-art performance on key metrics, reducing lip-sync error (LSE) and improving perceptual quality (FID, LPIPS), while preserving identity consistency, facial expressiveness, and precise lip articulation. It enables end-to-end, high-fidelity, 3D-aware video editing.
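To make the two-stage structure concrete, below is a minimal PyTorch sketch of how the pieces could be wired together at inference time. The module names (`FaceRecon3D`, `Audio2Motion`, `DepthGuidedGenerator`), coefficient dimensions, tensor shapes, and the zero-filled `render_depth` placeholder are all illustrative assumptions, not JoyGen's actual implementation.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the paper's components; real architectures and
# interfaces are not specified here and are assumptions for illustration only.
class FaceRecon3D(nn.Module):
    """Stage 1a: predicts 3DMM identity coefficients from a reference frame."""
    def __init__(self, id_dim=80):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.LazyLinear(id_dim))

    def forward(self, frame):                  # frame: (B, 3, H, W)
        return self.net(frame)                 # identity coeffs: (B, id_dim)

class Audio2Motion(nn.Module):
    """Stage 1b: maps audio features to 3DMM expression coefficients."""
    def __init__(self, audio_dim=256, exp_dim=64):
        super().__init__()
        self.net = nn.Linear(audio_dim, exp_dim)

    def forward(self, audio_feat):             # audio_feat: (B, audio_dim)
        return self.net(audio_feat)            # expression coeffs: (B, exp_dim)

class DepthGuidedGenerator(nn.Module):
    """Stage 2: synthesizes the edited face from the masked frame, depth map, and audio."""
    def __init__(self, audio_dim=256):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, 1)
        self.net = nn.Conv2d(3 + 1 + 1, 3, kernel_size=3, padding=1)

    def forward(self, masked_frame, depth_map, audio_feat):
        b, _, h, w = masked_frame.shape
        audio_plane = self.audio_proj(audio_feat).view(b, 1, 1, 1).expand(b, 1, h, w)
        x = torch.cat([masked_frame, depth_map, audio_plane], dim=1)
        return torch.tanh(self.net(x))         # edited frame: (B, 3, H, W)

def render_depth(identity, expression, size=(256, 256)):
    """Placeholder: a real pipeline would rebuild the 3DMM mesh from the
    coefficients and rasterize it into a per-pixel facial depth map."""
    return torch.zeros(identity.shape[0], 1, *size)

# Toy end-to-end pass on random data.
frame      = torch.randn(1, 3, 256, 256)       # reference frame
audio_feat = torch.randn(1, 256)               # audio features for one video frame
masked     = frame.clone()
masked[:, :, 128:, :] = 0                      # mask the mouth region to be re-synthesized

recon, a2m, gen = FaceRecon3D(), Audio2Motion(), DepthGuidedGenerator()
identity   = recon(frame)                      # stage 1: identity coefficients
expression = a2m(audio_feat)                   # stage 1: audio-driven expression coefficients
depth      = render_depth(identity, expression)
edited     = gen(masked, depth, audio_feat)    # stage 2: depth- and audio-guided synthesis
print(edited.shape)                            # torch.Size([1, 3, 256, 256])
```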
📝 Abstract
Significant progress has been made in talking-face video generation research; however, achieving precise lip-audio synchronization and high visual quality remains challenging when editing lip shapes based on input audio. This paper introduces JoyGen, a novel two-stage framework for talking-face generation, comprising audio-driven lip motion generation and visual appearance synthesis. In the first stage, a 3D reconstruction model and an audio2motion model predict identity and expression coefficients, respectively. Then, by integrating audio features with a facial depth map, we provide comprehensive supervision for precise lip-audio synchronization in facial generation. Additionally, we construct a Chinese talking-face dataset containing 130 hours of high-quality video. JoyGen is trained on the open-source HDTF dataset and our curated dataset. Experimental results demonstrate that our method achieves superior lip-audio synchronization and visual quality.
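The abstract does not spell out how the "comprehensive supervision" for lip-audio synchronization is formed, so the following is only a hedged sketch of one plausible combination during stage-2 training: a pixel-level L1 reconstruction loss plus a SyncNet-style lip-audio agreement term. `SyncEmbedder`, the lower-half mouth crop, and the `lambda_sync` weight are illustrative assumptions, not JoyGen's actual objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SyncEmbedder(nn.Module):
    """Toy stand-in for a SyncNet-style encoder that embeds a mouth crop and
    the corresponding audio features into a shared space."""
    def __init__(self, audio_dim=256, emb_dim=128):
        super().__init__()
        self.visual = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.LazyLinear(emb_dim))
        self.audio = nn.Linear(audio_dim, emb_dim)

    def forward(self, mouth_crop, audio_feat):
        return self.visual(mouth_crop), self.audio(audio_feat)

def joint_loss(pred, target, audio_feat, sync_embedder, lambda_sync=0.3):
    # Pixel-level reconstruction against the ground-truth frame.
    recon = F.l1_loss(pred, target)
    # Lip-audio agreement on a crude lower-half "mouth" crop of the prediction.
    mouth = pred[:, :, pred.shape[2] // 2 :, :]
    v_emb, a_emb = sync_embedder(mouth, audio_feat)
    sync = 1.0 - F.cosine_similarity(v_emb, a_emb, dim=-1).mean()
    # lambda_sync is an arbitrary placeholder weight.
    return recon + lambda_sync * sync

# Toy usage on random tensors.
pred, target = torch.rand(2, 3, 256, 256), torch.rand(2, 3, 256, 256)
audio_feat = torch.randn(2, 256)
loss = joint_loss(pred, target, audio_feat, SyncEmbedder())
loss.backward()
print(float(loss))
```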