AU Codes, Language, and Synthesis: Translating Anatomy to Text for Facial Behavior Synthesis

📅 2026-03-19
📈 Citations: 0 (influential: 0)
🤖 AI Summary
Existing facial behavior synthesis methods rely on coarse-grained emotion labels or linear combinations of action units (AUs); they struggle to model anatomically plausible complex expressions and often yield unnatural results. This work proposes a text-guided, high-fidelity facial expression generation framework that first translates AUs into natural language descriptions. The key contributions are threefold: constructing BP4D-AUText, the first large-scale dataset of paired AUs and textual descriptions; designing a rule-based Dynamic AU Text Processor; and introducing VQ-AUFace, a generative model that integrates facial structural priors with modern text-to-image techniques to improve anatomical plausibility. Experiments show that the approach significantly outperforms existing methods in both quantitative metrics and user studies, excelling in particular at synthesizing realistic, expressive, and perceptually credible facial behaviors in the presence of conflicting AUs.
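
The summary names a rule-based Dynamic AU Text Processor but does not spell out its rules. As a rough illustration of how such an AU-to-text step could work, here is a minimal Python sketch; it is a hypothetical stand-in, not the authors' processor. The AU names follow the standard FACS glossary, while the `CONFLICTS` table and the phrasing in `describe_aus` are assumptions made for illustration.

```python
# Hypothetical sketch of a rule-based AU-to-text step. This is NOT the
# paper's Dynamic AU Text Processor: the AU names follow the standard
# FACS glossary, but the conflict table and phrasing are assumptions.

AU_NAMES = {
    1: "inner brow raiser",
    2: "outer brow raiser",
    4: "brow lowerer",
    6: "cheek raiser",
    12: "lip corner puller",
    15: "lip corner depressor",
}

# AU pairs that drive the same facial region in opposing directions.
CONFLICTS = {frozenset({1, 4}), frozenset({2, 4}), frozenset({12, 15})}

def describe_aus(active_aus):
    """Compose a natural-language description of the active AUs,
    verbalizing conflicting pairs explicitly instead of listing
    them as independent activations."""
    active = sorted(set(active_aus))
    parts, consumed = [], set()
    for a in active:
        for b in active:
            if a < b and frozenset({a, b}) in CONFLICTS:
                parts.append(f"a tense interplay between the {AU_NAMES[a]} "
                             f"and the {AU_NAMES[b]}")
                consumed.update({a, b})
    parts += [f"the {AU_NAMES[a]} is active"
              for a in active if a not in consumed]
    return "A face showing " + "; ".join(parts) + "."

print(describe_aus([1, 4, 12]))
# -> A face showing a tense interplay between the inner brow raiser and
#    the brow lowerer; the lip corner puller is active.
```

Note how a conflicting pair such as AU1 (inner brow raiser) and AU4 (brow lowerer) is verbalized as one explicit interaction rather than listed as two independent activations; this is exactly the kind of relation a one-hot encoding cannot express.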

📝 Abstract
Facial behavior synthesis remains a critical yet underexplored challenge. While text-to-face models have made progress, they often rely on coarse emotion categories, which lack the nuance needed to capture the full spectrum of human nonverbal communication. Action Units (AUs) provide a more precise and anatomically grounded alternative. However, current AU-based approaches typically encode AUs as one-hot vectors, modeling compound expressions as simple linear combinations of individual AUs. This linearity becomes problematic when handling conflicting AUs, defined as AUs that activate the same facial muscle with opposing actions. Such cases lead to anatomically implausible artifacts and unnatural motion superpositions. To address this, we propose a novel method that represents facial behavior through natural language descriptions of AUs. This approach preserves the expressiveness of the AU framework while enabling explicit modeling of complex and conflicting AUs. It also unlocks the potential of modern text-to-image models for high-fidelity facial synthesis. Supporting this direction, we introduce BP4D-AUText, the first large-scale text-image paired dataset for complex facial behavior, synthesized by applying a rule-based Dynamic AU Text Processor to the BP4D and BP4D+ datasets. We further propose VQ-AUFace, a generative model that leverages facial structural priors to synthesize realistic and diverse facial behaviors from text. Extensive quantitative experiments and user studies demonstrate that our approach significantly outperforms existing methods. It excels in generating facial expressions that are anatomically plausible, behaviorally rich, and perceptually convincing, particularly under challenging conditions involving conflicting AUs.
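
To make the superposition problem concrete: under a one-hot (multi-hot) encoding, a compound expression is literally the sum of its single-AU vectors, so any generator that is linear in this representation blends conflicting motions instead of modeling their interaction. The NumPy sketch below illustrates the encoding; the AU vocabulary and indices are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Multi-hot encoding over a fixed AU vocabulary (illustrative indices,
# not the paper's; AU names follow the standard FACS glossary).
AU_INDEX = {1: 0, 2: 1, 4: 2, 6: 3, 12: 4, 15: 5}

def encode(active_aus):
    """Encode a set of active AUs as a multi-hot vector; compound
    expressions reduce to sums of single-AU vectors."""
    v = np.zeros(len(AU_INDEX))
    for au in active_aus:
        v[AU_INDEX[au]] = 1.0
    return v

# AU12 (lip corner puller) and AU15 (lip corner depressor) pull the same
# facial region in opposite directions, yet the encoding just adds them:
smile, frown = encode([12]), encode([15])
compound = encode([12, 15])
assert np.allclose(compound, smile + frown)  # pure linear superposition
```

A text description, by contrast, can state the conflict outright (for example, "the lip corners are pulled up and down in tension"), which is the representational gap the proposed method exploits.
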
Problem

Research questions and friction points this paper is trying to address.

Facial Behavior Synthesis
Action Units
Conflicting AUs
Anatomical Plausibility
Nonverbal Communication

Innovation

Methods, ideas, or system contributions that make the work stand out.

Action Units
Facial Behavior Synthesis
Natural Language Representation
Conflicting AUs
Text-to-Face Generation

👥 Authors
Jiahe Wang, University of Science and Technology of China, Hefei, China
Cong Liang, University of Science and Technology of China, Hefei, China (Affective computing)
Xuandong Huang, University of Science and Technology of China, Hefei, China
Yuxin Wang, University of Science and Technology of China, Hefei, China
Xin Yun, University of Science and Technology of China, Hefei, China
Yi Wu, University of Science and Technology of China, Hefei, China (AIGC, Multimodal)
Yanan Chang, University of Science and Technology of China, Hefei, China
Shangfei Wang, University of Science and Technology of China, Hefei, China