SyncTalk++: High-Fidelity and Efficient Synchronized Talking Heads Synthesis Using Gaussian Splatting

📅 2025-06-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the lack of visual realism in speech-driven talking-head video synthesis—caused by asynchrony among identity, lip motion, facial expression, and head pose—this paper proposes a high-fidelity, real-time end-to-end framework. Methodologically: (1) we introduce a novel dynamic Gaussian rasterization renderer, integrated with 3D blendshape modeling and a facial synchronization controller, enabling fine-grained, temporally coherent lip and expression generation; (2) we incorporate a head-pose stabilizer and a temporal pose optimization module to ensure natural, smooth head motion; (3) we design an out-of-distribution (OOD)-robust expression generator and a torso refinement module to enhance cross-context generalization. Experiments demonstrate real-time rendering at 101 FPS, with superior performance over state-of-the-art methods in lip-sync accuracy, visual fidelity, and user preference.

📝 Abstract
Achieving high synchronization in the synthesis of realistic, speech-driven talking head videos presents a significant challenge. A lifelike talking head requires synchronized coordination of subject identity, lip movements, facial expressions, and head poses. The absence of these synchronizations is a fundamental flaw, leading to unrealistic results. To address the critical issue of synchronization, identified as the "devil" in creating realistic talking heads, we introduce SyncTalk++, which features a Dynamic Portrait Renderer with Gaussian Splatting to ensure consistent subject identity preservation and a Face-Sync Controller that aligns lip movements with speech while innovatively using a 3D facial blendshape model to reconstruct accurate facial expressions. To ensure natural head movements, we propose a Head-Sync Stabilizer, which optimizes head poses for greater stability. Additionally, SyncTalk++ enhances robustness to out-of-distribution (OOD) audio by incorporating an Expression Generator and a Torso Restorer, which generate speech-matched facial expressions and seamless torso regions. Our approach maintains consistency and continuity in visual details across frames and significantly improves rendering speed and quality, achieving up to 101 frames per second. Extensive experiments and user studies demonstrate that SyncTalk++ outperforms state-of-the-art methods in synchronization and realism. We recommend watching the supplementary video: https://ziqiaopeng.github.io/synctalk++.
Problem

Research questions and friction points this paper is trying to address.

Achieving high synchronization in speech-driven talking head videos
Ensuring consistent identity, lip, expression, and pose synchronization
Enhancing robustness and speed in realistic talking head synthesis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic Portrait Renderer with Gaussian Splatting
Face-Sync Controller with 3D blendshape model
Head-Sync Stabilizer for natural head movements
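As a rough illustration of the second innovation above, the following sketch shows one common way a 3D blendshape model can drive a dynamic Gaussian renderer: per-frame blendshape coefficients linearly deform the canonical 3D Gaussian centers before rasterization. All names, array shapes, and the 52-coefficient basis size are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Hypothetical sketch (not the paper's implementation): a linear blendshape
# model applied to 3D Gaussian centers, as one way blendshape coefficients
# could drive per-frame deformation in a dynamic Gaussian portrait renderer.

rng = np.random.default_rng(0)

N_GAUSSIANS = 1000   # number of 3D Gaussians in the portrait (assumed)
N_BLENDSHAPES = 52   # e.g. an ARKit-style blendshape basis (assumed)

# Canonical (neutral-expression) Gaussian centers and a per-blendshape
# offset basis learned or fitted offline.
neutral_centers = rng.normal(size=(N_GAUSSIANS, 3))
blendshape_basis = 0.01 * rng.normal(size=(N_BLENDSHAPES, N_GAUSSIANS, 3))

def deform_centers(coeffs: np.ndarray) -> np.ndarray:
    """Linear blendshape model: centers = neutral + sum_k coeffs[k] * basis[k]."""
    assert coeffs.shape == (N_BLENDSHAPES,)
    # Contract the blendshape axis: (52,) x (52, N, 3) -> (N, 3) offset field.
    offset = np.tensordot(coeffs, blendshape_basis, axes=1)
    return neutral_centers + offset

# Example frame: only one coefficient (say, a jaw-open shape at index 0) active.
coeffs = np.zeros(N_BLENDSHAPES)
coeffs[0] = 0.8
frame_centers = deform_centers(coeffs)
print(frame_centers.shape)  # (1000, 3)
```

In a full pipeline, the deformed centers (plus per-Gaussian covariances, opacities, and colors) would then be splatted by the rasterizer each frame; the linear-blend step itself is cheap, which is consistent with the real-time rates the paper reports.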
👥 Authors

Ziqiao Peng (Renmin University of China)
Wentao Hu (PhD student, The Hong Kong Polytechnic University)
Junyuan Ma (Aerospace Information Research Institute, Chinese Academy of Sciences)
Xiangyu Zhu (Institute of Automation, Chinese Academy of Sciences)
Xiaomei Zhang (Institute of Automation, Chinese Academy of Sciences)
Hao Zhao (Institute for AI Industry Research, Tsinghua University)
Hui Tian (School of Information and Communication Engineering, Beijing University of Posts and Telecommunications)
Jun He (School of Information, Renmin University of China)
Hongyan Liu (Zhejiang University)
Zhaoxin Fan (Beijing Advanced Innovation Center for Future Blockchain and Privacy Computing, School of Artificial Intelligence, Beihang University; Hangzhou International Innovation Institute, Beihang University)