PTalker: Personalized Speech-Driven 3D Talking Head Animation via Style Disentanglement and Modality Alignment

📅 2025-12-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing speech-driven 3D talking head methods achieve high lip-sync accuracy but struggle to model individualized speaking styles, limiting visual realism and personalization. To address this, we propose a style-content dual disentanglement framework featuring the first cross-modal alignment mechanism operating simultaneously across spatial, temporal, and feature dimensions. Specifically, a latent-space style disentanglement encoder separates identity-specific speaking style from speech content; a graph attention network (GAT) coupled with cross-attention enables structured spatiotemporal modeling of facial dynamics; and a Top-k bidirectional contrastive loss jointly optimized with KL-divergence regularization ensures high-fidelity disentanglement. Evaluated on standard benchmarks, our method significantly outperforms state-of-the-art approaches—reducing lip-sync error (LSE) by 32%—and generates highly realistic, personalized, and temporally precise 3D talking head animations.

Technology Category

Application Category

📝 Abstract
Speech-driven 3D talking head generation aims to produce lifelike facial animations precisely synchronized with speech. While considerable progress has been made in achieving high lip-synchronization accuracy, existing methods largely overlook the intricate nuances of individual speaking styles, which limits personalization and realism. In this work, we present a novel framework for personalized 3D talking head animation, namely "PTalker". This framework preserves speaking style through style disentanglement from audio and facial motion sequences and enhances lip-synchronization accuracy through a three-level alignment mechanism between audio and mesh modalities. Specifically, to effectively disentangle style and content, we design disentanglement constraints that encode driven audio and motion sequences into distinct style and content spaces to enhance speaking style representation. To improve lip-synchronization accuracy, we adopt a modality alignment mechanism incorporating three aspects: spatial alignment using Graph Attention Networks to capture vertex connectivity in the 3D mesh structure, temporal alignment using cross-attention to capture and synchronize temporal dependencies, and feature alignment by top-k bidirectional contrastive losses and KL divergence constraints to ensure consistency between speech and mesh modalities. Extensive qualitative and quantitative experiments on public datasets demonstrate that PTalker effectively generates realistic, stylized 3D talking heads that accurately match identity-specific speaking styles, outperforming state-of-the-art methods. The source code and supplementary videos are available at: PTalker.
Problem

Research questions and friction points this paper is trying to address.

Generates personalized 3D talking head animations from speech
Disentangles speaking style from audio and facial motion for realism
Aligns audio and 3D mesh modalities to improve lip-synchronization accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Style disentanglement from audio and motion sequences
Three-level alignment mechanism for lip-synchronization
Graph Attention Networks and cross-attention for modality alignment
🔎 Similar Papers
No similar papers found.
B
Bin Wang
College of Computer Science and Electronic Engineering, Hunan University, Changsha, China
Y
Yang Xu
College of Computer Science and Electronic Engineering, Hunan University, Changsha, China
H
Huan Zhao
College of Computer Science and Electronic Engineering, Hunan University, Changsha, China
H
Hao Zhang
School of Electronic Information, Central South University, Changsha, China
Zixing Zhang
Zixing Zhang
Professor, Hunan University
Artifical IntelligenceSpeech ProcessingAffective ComputingDigital HealthAutomatic Speech Recognition