ProbTalk3D: Non-Deterministic Emotion Controllable Speech-Driven 3D Facial Animation Synthesis Using VQ-VAE

📅 2024-09-12
🏛️ Motion in Games
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing speech-driven 3D facial animation methods focus primarily on lip-sync accuracy and identity preservation, neglecting explicit emotion modeling, and typically adopt deterministic architectures that produce monotonous, low-diversity outputs. To address this, the authors propose the first non-deterministic generative framework offering control over both emotion categories and intensity levels. Built on the emotionally rich 3DMEAD dataset, the approach employs a two-stage VQ-VAE architecture combining emotion-conditioned modeling with stochastic latent sampling. Evaluation uses a multi-faceted protocol suited to stochastic generation: objective metrics, qualitative comparisons, and a perceptual user study. The experiments show that the method outperforms state-of-the-art emotion-controllable, deterministic, and non-deterministic baselines in emotional expressiveness, motion diversity, and visual naturalness. The codebase is publicly released.
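As a rough illustration of the two-stage design described above, the sketch below lays out a minimal PyTorch version: Stage 1 learns a discrete codebook over facial motion via vector quantization, and Stage 2 maps audio features plus emotion and intensity labels to a distribution over codebook indices. All module names, dimensions (e.g., a 256-entry codebook, 768-dimensional audio features), and layer choices are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class VectorQuantizer(nn.Module):
    """Nearest-neighbour vector quantization with a straight-through estimator."""

    def __init__(self, num_codes=256, code_dim=128):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)

    def forward(self, z):                                            # z: (B, T, code_dim)
        dists = torch.cdist(z, self.codebook.weight.unsqueeze(0))    # (B, T, num_codes)
        indices = dists.argmin(dim=-1)                               # discrete motion codes
        z_q = self.codebook(indices)
        z_q = z + (z_q - z).detach()                                 # straight-through gradient
        return z_q, indices


class MotionVQVAE(nn.Module):
    """Stage 1 (hypothetical): autoencode facial motion into a discrete latent sequence."""

    def __init__(self, motion_dim=15069, code_dim=128):
        super().__init__()
        self.encoder = nn.Linear(motion_dim, code_dim)
        self.quantizer = VectorQuantizer(code_dim=code_dim)
        self.decoder = nn.Linear(code_dim, motion_dim)

    def forward(self, motion):                                       # motion: (B, T, motion_dim)
        z_q, indices = self.quantizer(self.encoder(motion))
        return self.decoder(z_q), indices


class AudioEmotionPrior(nn.Module):
    """Stage 2 (hypothetical): predict a distribution over motion codes from audio + emotion."""

    def __init__(self, audio_dim=768, num_codes=256,
                 num_emotions=8, num_intensities=3):
        super().__init__()
        self.emotion_emb = nn.Embedding(num_emotions, 64)
        self.intensity_emb = nn.Embedding(num_intensities, 64)
        self.rnn = nn.GRU(audio_dim + 128, 256, batch_first=True)
        self.head = nn.Linear(256, num_codes)

    def forward(self, audio_feats, emotion, intensity):              # audio_feats: (B, T, audio_dim)
        cond = torch.cat([self.emotion_emb(emotion),
                          self.intensity_emb(intensity)], dim=-1)    # (B, 128)
        cond = cond.unsqueeze(1).expand(-1, audio_feats.size(1), -1)
        h, _ = self.rnn(torch.cat([audio_feats, cond], dim=-1))
        return self.head(h)                                          # logits over codebook entries
```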

📝 Abstract
Audio-driven 3D facial animation synthesis has been an active field of research with attention from both academia and industry. While there are promising results in this area, recent approaches largely focus on lip-sync and identity control, neglecting the role of emotions and emotion control in the generative process. That is mainly due to the lack of emotionally rich facial animation data and algorithms that can synthesize speech animations with emotional expressions at the same time. In addition, the majority of models are deterministic, meaning that given the same audio input, they produce the same output motion. We argue that emotions and non-determinism are crucial to generate diverse and emotionally rich facial animations. In this paper, we propose ProbTalk3D, a non-deterministic neural network approach for emotion-controllable speech-driven 3D facial animation synthesis using a two-stage VQ-VAE model and the emotionally rich facial animation dataset 3DMEAD. We provide an extensive comparative analysis of our model against recent 3D facial animation synthesis approaches, evaluating the results objectively, qualitatively, and with a perceptual user study. We highlight several objective metrics that are more suitable for evaluating stochastic outputs and use both in-the-wild and ground truth data for subjective evaluation. To our knowledge, this is the first non-deterministic 3D facial animation synthesis method incorporating a rich emotion dataset and emotion control with emotion labels and intensity levels. Our evaluation demonstrates that the proposed model achieves superior performance compared to state-of-the-art emotion-controlled, deterministic, and non-deterministic models. We recommend watching the supplementary video for quality judgement. The entire codebase is publicly available (https://github.com/uuembodiedsocialai/ProbTalk3D/).
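To make the non-deterministic aspect concrete, the following continues the hypothetical modules sketched above: at inference time, motion codes are sampled from the predicted categorical distribution rather than taken by argmax, so the same audio, emotion label, and intensity level can yield different plausible animations. The `synthesize` helper and the temperature parameter are illustrative assumptions, not the authors' code.

```python
import torch


@torch.no_grad()
def synthesize(prior, vqvae, audio_feats, emotion_id, intensity_id, temperature=1.0):
    """Sample one animation for audio_feats of shape (1, T, audio_dim)."""
    emotion = torch.tensor([emotion_id])
    intensity = torch.tensor([intensity_id])
    logits = prior(audio_feats, emotion, intensity) / temperature    # (1, T, num_codes)
    probs = torch.softmax(logits, dim=-1).squeeze(0)                 # (T, num_codes)
    indices = torch.multinomial(probs, num_samples=1).squeeze(-1)    # one code per frame
    z_q = vqvae.quantizer.codebook(indices).unsqueeze(0)             # (1, T, code_dim)
    return vqvae.decoder(z_q)                                        # (1, T, motion_dim)
```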
Problem

Research questions and friction points this paper is trying to address.

Emotion control in 3D facial animation synthesis
Non-deterministic speech-driven animation generation
Lack of emotionally rich facial animation datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Non-deterministic neural network
Emotion controllable synthesis
Two-stage VQ-VAE model
🔎 Similar Papers
2024-03-19 · IEEE Workshop/Winter Conference on Applications of Computer Vision · Citations: 4
Sichun Wu
Utrecht University, Utrecht, The Netherlands
Kazi Injamamul Haque
Utrecht University, Utrecht, The Netherlands
Zerrin Yumak
Utrecht University
Interactive virtual characters, social robots, artificial intelligence, human-computer interaction