Learning Disentangled Speech- and Expression-Driven Blendshapes for 3D Talking Face Animation

πŸ“… 2025-10-29
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address the challenge of co-modeling lip motion and emotional expression, caused by the scarcity of real-world 3D talking-face data with authentic emotions, this paper proposes a speech- and expression-decoupled method for generating 3D facial animation. The core contribution lies in modeling speech- and emotion-related blendshapes as a linearly additive system, augmented with a sparsity-constrained loss that effectively decouples their parameter representations while preserving natural facial deformations. The method is jointly trained on VOCAset and Florence4D, and the learned blendshapes are mapped to FLAME's expression and jaw pose parameters. Experiments demonstrate that the approach achieves high-fidelity lip synchronization while significantly improving the naturalness and perceptual discriminability of target emotions; a user study confirms its superiority over current state-of-the-art methods in subjective evaluation.
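The additive formulation and the sparsity constraint can be stated concretely. Below is a minimal PyTorch sketch, assuming flattened per-vertex offsets; the blendshape counts Ks and Ke, the L1 form of the penalty, and the weight lam are illustrative assumptions, not the paper's exact loss.

```python
import torch

def reconstruct(template, B_speech, B_expr, w_speech, w_expr):
    # Linearly additive model: neutral template plus speech-driven and
    # expression-driven blendshape offsets.
    # template: (3V,), B_speech: (3V, Ks), B_expr: (3V, Ke)
    return template + B_speech @ w_speech + B_expr @ w_expr

def cross_domain_sparsity_loss(w_speech, w_expr, domain, lam=1e-2):
    # On speech-only data (VOCAset, neutral expressions) the expression
    # coefficients should stay near zero; on expression-only data
    # (Florence4D) the speech coefficients should. An L1 penalty keeps
    # out-of-domain activations sparse without forcing them to exactly
    # zero, leaving room for secondary cross-domain deformations.
    if domain == "speech":
        return lam * w_expr.abs().mean()
    return lam * w_speech.abs().mean()

# Toy usage with random stand-ins for the learned quantities.
V, Ks, Ke = 5023, 32, 32          # FLAME has 5023 vertices; Ks/Ke are assumptions
template = torch.zeros(3 * V)
B_speech = 1e-3 * torch.randn(3 * V, Ks)
B_expr = 1e-3 * torch.randn(3 * V, Ke)
w_s, w_e = torch.rand(Ks), torch.rand(Ke)

frame = reconstruct(template, B_speech, B_expr, w_s, w_e)   # (3V,) vertex positions
loss = cross_domain_sparsity_loss(w_s, w_e, domain="speech")
```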

πŸ“ Abstract
Expressions are fundamental to conveying human emotions. With the rapid advancement of AI-generated content (AIGC), realistic and expressive 3D facial animation has become increasingly crucial. Despite recent progress in speech-driven lip-sync for talking-face animation, generating emotionally expressive talking faces remains underexplored. A major obstacle is the scarcity of real emotional 3D talking-face datasets due to the high cost of data capture. To address this, we model facial animation driven by both speech and emotion as a linear additive problem. Leveraging a 3D talking-face dataset with neutral expressions (VOCAset) and a dataset of 3D expression sequences (Florence4D), we jointly learn a set of blendshapes driven by speech and emotion. We introduce a sparsity constraint loss to encourage disentanglement between the two types of blendshapes while allowing the model to capture inherent secondary cross-domain deformations present in the training data. The learned blendshapes can be further mapped to the expression and jaw pose parameters of the FLAME model, enabling the animation of 3D Gaussian avatars. Qualitative and quantitative experiments demonstrate that our method naturally generates talking faces with specified expressions while maintaining accurate lip synchronization. Perceptual studies further show that our approach achieves superior emotional expressivity compared to existing methods, without compromising lip-sync quality.
Problem

Research questions and friction points this paper is trying to address.

Generating emotionally expressive 3D talking faces with accurate lip synchronization
Addressing the scarcity of real emotional 3D talking-face datasets through joint learning on complementary data
Disentangling speech- and expression-driven facial animation using sparsity constraints
Innovation

Methods, ideas, or system contributions that make the work stand out.

Jointly learns disentangled speech- and expression-driven blendshapes
Uses a sparsity-constrained loss to decouple the two blendshape sets while retaining secondary cross-domain deformations
Maps learned blendshapes to FLAME expression and jaw pose parameters for 3D avatar animation (see the sketch after this list)
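A hedged sketch of the blendshape-to-FLAME mapping named in the last bullet: the paper only states that the learned blendshapes are mapped to FLAME's expression and jaw pose parameters, so the linear layers and the dimensions n_blendshapes=64 and n_expr=100 below are assumptions for illustration.

```python
import torch
import torch.nn as nn

class BlendshapeToFLAME(nn.Module):
    """Maps learned blendshape coefficients to FLAME parameters.

    The linear form and the dimensionalities are assumptions;
    the paper specifies only the target parameter spaces.
    """
    def __init__(self, n_blendshapes=64, n_expr=100):
        super().__init__()
        self.to_expr = nn.Linear(n_blendshapes, n_expr)  # FLAME expression coefficients
        self.to_jaw = nn.Linear(n_blendshapes, 3)        # jaw pose as axis-angle rotation

    def forward(self, w):
        # w: (batch, n_blendshapes) concatenated speech + expression weights
        return self.to_expr(w), self.to_jaw(w)

# Per-frame blendshape weights in, FLAME parameters out, ready to drive an avatar.
mapper = BlendshapeToFLAME()
expr_params, jaw_pose = mapper(torch.rand(1, 64))
```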
πŸ”Ž Similar Papers
No similar papers found.
Yuxiang Mao
Institute of Computing Technology, Chinese Academy of Sciences
Zhijie Zhang
Fudan University
Zhiheng Zhang
Institute of Computing Technology, Chinese Academy of Sciences
Jiawei Liu
Huadian (Beijing) Co-Generation Co., Ltd.
Chen Zeng
Huadian (Beijing) Co-Generation Co., Ltd.
Shihong Xia
Institute of Computing Technology, Chinese Academy of Sciences