Realistic Lip Motion Generation Based on 3D Dynamic Viseme and Coarticulation Modeling for Human-Robot Interaction

📅 2026-04-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses unnatural speech-driven lip-sync in humanoid robots during human-robot interaction. To tackle this issue, the authors propose a lightweight lip-motion generation method that integrates 3D dynamic viseme priors with a phoneme initial-final decoupling strategy grounded in Chinese phonetic theory. They construct an ARKit-compliant dynamic viseme library and introduce an energy modulation mechanism to model coarticulation effects. Furthermore, a real-time retargeting strategy maps high-dimensional lip motions onto a 14-degree-of-freedom actuation system. Experiments on a physical robot platform demonstrate that the proposed approach effectively mitigates motion conflicts in continuous speech, achieving high accuracy and smoothness in the generated lip movements as measured by the Pearson Correlation Coefficient (PCC) and Mean Absolute Jerk (MAJ).
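The summary cites PCC and MAJ without giving the authors' exact formulations, but both metrics have conventional definitions: PCC measures how closely a generated lip trajectory tracks a reference, and MAJ (the mean magnitude of the third time derivative) measures smoothness. The Python/NumPy sketch below is a minimal illustration under those standard definitions; the function names and the finite-difference jerk estimate are assumptions, not the paper's implementation.

```python
import numpy as np

def pearson_cc(generated: np.ndarray, reference: np.ndarray) -> float:
    """Pearson Correlation Coefficient between a generated and a reference
    lip trajectory (two equal-length 1-D arrays); 1.0 means perfect tracking."""
    return float(np.corrcoef(generated, reference)[0, 1])

def mean_absolute_jerk(trajectory: np.ndarray, dt: float) -> float:
    """Mean Absolute Jerk of a sampled trajectory (1-D array, sample period dt in s).
    Jerk is the third time derivative of position; lower values mean smoother motion."""
    jerk = np.diff(trajectory, n=3) / dt**3   # finite-difference third derivative
    return float(np.mean(np.abs(jerk)))

# Example: a smooth sine trajectory correlates perfectly with itself and has low jerk.
t = np.arange(0.0, 1.0, 0.01)
traj = np.sin(2 * np.pi * t)
print(pearson_cc(traj, traj), mean_absolute_jerk(traj, dt=0.01))
```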
📝 Abstract
Realistic lip synchronization is essential for natural non-verbal human-robot interaction with humanoid robots. Motivated by this need, this paper presents a lip motion generation framework based on 3D dynamic visemes and coarticulation modeling. By analyzing Chinese pronunciation theory, a 3D dynamic viseme library is constructed based on the ARKit standard, which offers coherent prior lip trajectories. To resolve motion conflicts within continuous speech streams, a coarticulation mechanism is developed by incorporating initial-final (Shengmu-Yunmu) decoupling and energy modulation. After developing a strategy to retarget high-dimensional spatial lip motion to the 14-DOF lip actuation system of a humanoid head platform, the efficiency and accuracy of the proposed architecture are experimentally validated and demonstrated with quantitative ablation experiments using the Pearson Correlation Coefficient (PCC) and Mean Absolute Jerk (MAJ) metrics. This research offers a lightweight, efficient, and highly practical paradigm for speech-driven lip motion generation for humanoid robots. The 3D dynamic viseme library and real-world deployment videos are available at https://github.com/yuesheng21/Phoneme-to-Lip-14DOF.
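The abstract outlines a pipeline (dynamic viseme priors, coarticulation via initial-final decoupling and energy modulation, retargeting to a 14-DOF lip mechanism) without implementation details. The sketch below is a hypothetical Python/NumPy illustration of two of those steps: cross-fading overlapping viseme trajectories with a speech-energy factor, and linearly mapping ARKit-style blendshape frames to actuator commands. The function names, the cross-fade blend, and the linear retargeting matrix are all assumptions for illustration, not the authors' method.

```python
import numpy as np

NUM_BLENDSHAPES = 52   # ARKit face blendshape count (assumed for illustration)
NUM_DOF = 14           # lip actuation DOFs of the robot head (from the paper)

def blend_visemes(prev_vis: np.ndarray, next_vis: np.ndarray,
                  energy: float, overlap: int) -> np.ndarray:
    """Cross-fade the overlapping frames of two viseme trajectories
    (each shaped [T, NUM_BLENDSHAPES]), scaling the incoming viseme by a
    speech-energy factor as a stand-in for the paper's energy modulation."""
    alpha = np.linspace(0.0, 1.0, overlap)[:, None]          # per-frame fade-in weights
    tail = prev_vis[-overlap:]
    head = next_vis[:overlap] * energy
    blended = (1.0 - alpha) * tail + alpha * head
    return np.concatenate([prev_vis[:-overlap], blended, next_vis[overlap:]], axis=0)

def retarget_to_actuators(frames: np.ndarray, retarget_matrix: np.ndarray,
                          limits: np.ndarray) -> np.ndarray:
    """Map blendshape frames [T, NUM_BLENDSHAPES] to actuator commands [T, NUM_DOF]
    with a linear retargeting matrix [NUM_BLENDSHAPES, NUM_DOF], then clip each
    command to per-joint limits given as a [NUM_DOF, 2] (min, max) array."""
    commands = frames @ retarget_matrix
    return np.clip(commands, limits[:, 0], limits[:, 1])
```

In practice the blended blendshape frames would be fed through the retargeting step at each control cycle; real-time use would also require the actual calibrated retargeting map and joint limits of the head platform.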
Problem

Research questions and friction points this paper is trying to address.

lip motion generation
human-robot interaction
viseme
coarticulation
speech-driven animation
Innovation

Methods, ideas, or system contributions that make the work stand out.

3D dynamic viseme
coarticulation modeling
lip motion generation
humanoid robot
speech-driven animation
Sheng Li
Quantitative Foundation Associate Professor of Data Science, University of Virginia
Trustworthy AI · Machine Learning · Causal Inference · Computer Vision · AI for Education
Jingcheng Huang
The State Key Laboratory of Intelligent Manufacturing Equipment and Technology, Huazhong University of Science and Technology, Wuhan 430074, China
Min Li
The State Key Laboratory of Intelligent Manufacturing Equipment and Technology, Huazhong University of Science and Technology, Wuhan 430074, China