TDMM-LM: Bridging Facial Understanding and Animation via Language Models

📅 2026-03-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the scarcity of large-scale text-motion paired data, a key bottleneck in facial animation research, by reframing facial parameter modeling as a language-centric problem. The authors propose a unified framework for text-driven facial animation and understanding, leveraging multiple generators to synthesize 80 hours of video. By integrating 3D facial parameter fitting with motion token quantization, they construct a large-scale dataset of aligned text–3D facial parameter pairs. A large language model is then trained to enable bidirectional mapping between motion and language—supporting both Motion2Language (describing facial actions) and Language2Motion (generating facial animations from text). Experimental results demonstrate that the approach exhibits strong generalization capabilities in both facial action understanding and generation, significantly improving the quality of text-guided facial animation.
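The core mechanism here is motion token quantization: continuous per-frame 3D facial parameters are discretized into tokens that a language model can read and emit. A minimal VQ-style sketch of that idea follows; the parameter dimension, codebook size, and all class and variable names are illustrative assumptions, not the paper's reported architecture.

```python
# Minimal VQ-style motion tokenizer (illustrative sketch, not the paper's model).
# Assumption: each frame's 3D facial parameters form a flat 56-dim vector.
import torch
import torch.nn as nn

class MotionQuantizer(nn.Module):
    def __init__(self, param_dim=56, codebook_size=512, code_dim=128):
        super().__init__()
        self.encoder = nn.Linear(param_dim, code_dim)    # per-frame encoder
        self.codebook = nn.Embedding(codebook_size, code_dim)
        self.decoder = nn.Linear(code_dim, param_dim)    # reconstructs parameters

    def forward(self, params):                 # params: (T, param_dim)
        z = self.encoder(params)                # (T, code_dim)
        # Nearest codebook entry per frame gives the discrete motion tokens.
        dists = torch.cdist(z, self.codebook.weight)     # (T, codebook_size)
        tokens = dists.argmin(dim=-1)                    # (T,) integer token ids
        quantized = self.codebook(tokens)
        # Straight-through estimator so gradients flow back to the encoder.
        quantized = z + (quantized - z).detach()
        recon = self.decoder(quantized)
        return tokens, recon

quantizer = MotionQuantizer()
frames = torch.randn(120, 56)                  # e.g. 4 s of parameters at 30 fps
tokens, recon = quantizer(frames)
print(tokens[:10])                             # discrete ids an LLM can consume
```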

📝 Abstract
Text-guided human body animation has advanced rapidly, yet facial animation lags behind due to the scarcity of well-annotated, text-paired facial corpora. To close this gap, we leverage foundation generative models to synthesize a large, balanced corpus of facial behavior. We design a prompt suite covering emotions and head motions, generate about 80 hours of facial video with multiple generators, and fit per-frame 3D facial parameters, yielding large-scale prompt–parameter pairs for training. Building on this dataset, we probe language models for bidirectional competence over facial motion via two complementary tasks: (1) Motion2Language: given a sequence of 3D facial parameters, the model produces natural-language descriptions capturing content, style, and dynamics; and (2) Language2Motion: given a prompt, the model synthesizes the corresponding sequence of 3D facial parameters via quantized motion tokens for downstream animation. Extensive experiments show that, in this setting, language models can both interpret and synthesize facial motion with strong generalization. To the best of our knowledge, this is the first work to cast facial-parameter modeling as a language problem, establishing a unified path for text-conditioned facial animation and motion understanding.
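With motion discretized, both tasks reduce to ordinary sequence-to-sequence prompting over mixed text and motion tokens. The snippet below is a hypothetical illustration of that interface; the `<motion_*>` token format and the prompt templates are assumptions for clarity, not the paper's exact training format.

```python
# Hypothetical serialization of motion tokens and the two task prompts.
def to_motion_string(tokens):
    """Serialize discrete motion-token ids so a language model can read them."""
    return " ".join(f"<motion_{t}>" for t in tokens)

def motion2language_prompt(tokens):
    # Motion2Language: motion tokens in, natural-language description out.
    return f"Describe this facial motion: {to_motion_string(tokens)}\nDescription:"

def language2motion_prompt(text):
    # Language2Motion: text in, motion tokens out (decoded back to 3D parameters).
    return f"Generate facial motion tokens for: {text}\nMotion:"

print(motion2language_prompt([17, 42, 42, 8]))
print(language2motion_prompt("a surprised face turning to the left"))
```

In this framing the language model never sees raw 3D parameters; it only reads and writes token ids, and the quantizer's decoder turns generated tokens back into parameter sequences for animation.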
Problem

Research questions and friction points this paper is trying to address.

facial animation
text-guided animation
facial behavior corpus
language models
3D facial parameters
Innovation

Methods, ideas, or system contributions that make the work stand out.

facial animation
language models
3D facial parameters
text-to-motion
synthetic dataset