🤖 AI Summary
Existing speech-driven 3D talking head methods achieve high lip-sync accuracy but struggle to model individualized speaking styles, limiting visual realism and personalization. To address this, we propose a style-content dual-disentanglement framework featuring the first cross-modal alignment mechanism that operates simultaneously across the spatial, temporal, and feature dimensions. Specifically, a latent-space style-disentanglement encoder separates identity-specific speaking style from speech content; a graph attention network (GAT) coupled with cross-attention enables structured spatiotemporal modeling of facial dynamics; and a top-k bidirectional contrastive loss, jointly optimized with a KL-divergence regularizer, ensures high-fidelity disentanglement. Evaluated on standard benchmarks, our method significantly outperforms state-of-the-art approaches, reducing lip-sync error (LSE) by 32%, and generates realistic, personalized, and temporally precise 3D talking head animations.
📝 Abstract
Speech-driven 3D talking head generation aims to produce lifelike facial animations precisely synchronized with speech. While considerable progress has been made toward high lip-synchronization accuracy, existing methods largely overlook the nuances of individual speaking styles, which limits personalization and realism. In this work, we present PTalker, a novel framework for personalized 3D talking head animation. PTalker preserves speaking style by disentangling style from audio and facial motion sequences, and improves lip-synchronization accuracy through a three-level alignment mechanism between the audio and mesh modalities. Specifically, to disentangle style from content, we design disentanglement constraints that encode the driving audio and motion sequences into distinct style and content spaces, strengthening the speaking-style representation. To improve lip synchronization, the alignment mechanism operates at three levels: spatial alignment, where graph attention networks capture vertex connectivity in the 3D mesh structure; temporal alignment, where cross-attention captures and synchronizes temporal dependencies; and feature alignment, where a top-k bidirectional contrastive loss and KL-divergence constraints enforce consistency between the speech and mesh modalities. Extensive qualitative and quantitative experiments on public datasets demonstrate that PTalker generates realistic, stylized 3D talking heads that accurately match identity-specific speaking styles, outperforming state-of-the-art methods. The source code and supplementary videos are available at: PTalker.
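To make the feature-alignment idea concrete, the snippet below is a minimal NumPy sketch of a top-k bidirectional contrastive objective between paired audio and mesh features. The function name, feature shapes, temperature `tau`, and the choice of keeping only the k hardest (most similar) negatives are our illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def topk_bidirectional_contrastive(audio, mesh, k=4, tau=0.07):
    """Symmetric InfoNCE-style loss over paired features (N, d),
    restricting each anchor's negatives to its top-k hardest ones.
    Illustrative sketch only; not the paper's implementation."""
    # L2-normalize both modalities so similarities are cosine similarities.
    a = audio / np.linalg.norm(audio, axis=1, keepdims=True)
    m = mesh / np.linalg.norm(mesh, axis=1, keepdims=True)
    sim = (a @ m.T) / tau  # (N, N); diagonal entries are the positive pairs
    n = sim.shape[0]

    def one_direction(s):
        losses = []
        for i in range(n):
            pos = s[i, i]
            negs = np.delete(s[i], i)          # all non-matching pairs
            hard = np.sort(negs)[-k:]          # k hardest (largest) negatives
            logits = np.concatenate(([pos], hard))
            # -log softmax probability of the positive among hard negatives
            losses.append(-pos + np.log(np.exp(logits).sum()))
        return float(np.mean(losses))

    # "Bidirectional": audio->mesh and mesh->audio retrieval directions.
    return 0.5 * (one_direction(sim) + one_direction(sim.T))
```

Averaging the two directions mirrors the bidirectional formulation described in the abstract; in practice such a loss would typically be combined with the KL-divergence constraints on the style/content spaces.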