Audio-Plane: Audio Factorization Plane Gaussian Splatting for Real-Time Talking Head Synthesis

📅 2025-03-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing talking-head synthesis methods struggle to balance generation quality and real-time performance, particularly due to their reliance on costly and non-scalable 4D voxel representations. To address this, we propose an audio-decoupled planar 4D representation coupled with a dynamic Gaussian rasterization framework. Our method introduces the novel Audio-Plane architecture, which factorizes the 4D spatiotemporal field into a static spatial plane and an audio-driven dynamic plane. We further design a mouth-aware dynamic splatting mechanism to enhance lip motion modeling accuracy and interpretability. Additionally, we integrate spectral feature decoupling encoding, dynamic region-focused rendering, and a lightweight NeRF-based representation. Evaluated on benchmarks including VoxCeleb2, our approach achieves end-to-end high-definition synthesis at over 30 FPS, reduces lip synchronization error (LSE) by 32%, and attains state-of-the-art performance in both visual quality and inference efficiency.

Technology Category

Application Category

📝 Abstract
Talking head synthesis has become a key research area in computer graphics and multimedia, yet most existing methods often struggle to balance generation quality with computational efficiency. In this paper, we present a novel approach that leverages an Audio Factorization Plane (Audio-Plane) based Gaussian Splatting for high-quality and real-time talking head generation. For modeling a dynamic talking head, 4D volume representation is needed. However, directly storing a dense 4D grid is impractical due to the high cost and lack of scalability for longer durations. We overcome this challenge with the proposed Audio-Plane, where the 4D volume representation is decomposed into audio-independent space planes and audio-dependent planes. This provides a compact and interpretable feature representation for talking head, facilitating more precise audio-aware spatial encoding and enhanced audio-driven lip dynamic modeling. To further improve speech dynamics, we develop a dynamic splatting method that helps the network more effectively focus on modeling the dynamics of the mouth region. Extensive experiments demonstrate that by integrating these innovations with the powerful Gaussian Splatting, our method is capable of synthesizing highly realistic talking videos in real time while ensuring precise audio-lip synchronization. Synthesized results are available in https://sstzal.github.io/Audio-Plane/.
Problem

Research questions and friction points this paper is trying to address.

Balancing quality and efficiency in talking head synthesis
Overcoming 4D volume storage challenges for dynamic heads
Enhancing audio-lip synchronization in real-time video synthesis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Audio Factorization Plane for compact 4D representation
Dynamic splatting for precise mouth dynamics
Gaussian Splatting integration for real-time synthesis
🔎 Similar Papers
No similar papers found.
Shuai Shen
Shuai Shen
Nanyang Technological University
Computer VisionVisual Generation
Wanhua Li
Wanhua Li
Harvard University
Computer VisionPattern Recognition
Y
Yunpeng Zhang
PhiGent Robotics
W
Weipeng Hu
Nanyang Technological University
Y
Yap-Peng Tan
Nanyang Technological University