A$^2$-LLM: An End-to-end Conversational Audio Avatar Large Language Model

📅 2026-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes the first end-to-end audio-driven avatar large language model that jointly reasons over linguistic semantics, speech prosody, and 3D facial dynamics within a unified framework. It overcomes the limitations of conventional cascaded architectures, such as error propagation, high latency, and inadequate emotional expressiveness, as well as the narrow focus on lip-sync accuracy in prior approaches. By introducing FLAME-QA, a high-quality multimodal question-answering dataset for cross-modal alignment, together with a semantics-driven 3D facial animation generation technique, the system achieves real-time performance (500 ms latency, 0.7 RTF) while significantly improving emotional expressiveness over existing cascaded systems.

📝 Abstract
Developing expressive and responsive conversational digital humans is a cornerstone of next-generation human-computer interaction. While large language models (LLMs) have significantly enhanced dialogue capabilities, most current systems still rely on cascaded architectures that connect independent modules. These pipelines are often plagued by accumulated errors, high latency, and poor real-time performance. Lacking access to the underlying conversational context, these pipelines inherently prioritize rigid lip-sync over emotional depth. To address these challenges, we propose A$^2$-LLM, an end-to-end conversational audio avatar large language model that jointly reasons about language, audio prosody, and 3D facial motion within a unified framework. To facilitate training, we introduce FLAME-QA, a high-quality multimodal dataset designed to align semantic intent with expressive facial dynamics within a QA format. By leveraging deep semantic understanding, A$^2$-LLM generates emotionally rich facial movements beyond simple lip-synchronization. Experimental results demonstrate that our system achieves superior emotional expressiveness while maintaining real-time efficiency (500 ms latency, 0.7 RTF).
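As a quick illustration of the efficiency figures quoted above: the real-time factor (RTF) is conventionally defined as processing time divided by the duration of the generated audio, so an RTF below 1 means the system runs faster than real time. A minimal sketch of that calculation; the timing numbers below are hypothetical and only chosen to reproduce the paper's reported 0.7 RTF, not taken from its experiments:

```python
# Real-time factor (RTF): processing time / duration of generated audio.
# RTF < 1.0 means the avatar responds faster than real time.

def real_time_factor(processing_seconds: float, audio_seconds: float) -> float:
    """Return processing time per second of generated audio."""
    if audio_seconds <= 0:
        raise ValueError("audio duration must be positive")
    return processing_seconds / audio_seconds

# Hypothetical example: 7 s of compute for a 10 s spoken response
# gives RTF = 0.7, matching the efficiency the paper reports.
rtf = real_time_factor(7.0, 10.0)
print(f"RTF = {rtf:.1f}")  # RTF = 0.7
```

Combined with the 500 ms latency figure, this means the first response arrives quickly and generation then keeps pace with playback.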
Problem

Research questions and friction points this paper is trying to address.

conversational digital humans
cascaded architectures
emotional expressiveness
real-time performance
lip-sync
Innovation

Methods, ideas, or system contributions that make the work stand out.

end-to-end
conversational avatar
multimodal LLM
emotional expressiveness
real-time facial animation
Xiaolin Hu
Professor of Computer Science, Georgia State University
Modeling and simulation; agent and multi-agent systems

Hang Yuan
School of Finance and Statistics, East China Normal University, Shanghai, China; Zhongguancun Institute of Artificial Intelligence, Beijing, China

Xinzhu Sang
State Key Laboratory of Information Photonics and Optical Communications, Beijing University of Posts and Telecommunications, Beijing, China

Binbin Yan
State Key Laboratory of Information Photonics and Optical Communications, Beijing University of Posts and Telecommunications, Beijing, China

Zhou Yu
School of Finance and Statistics, East China Normal University, Shanghai, China

Cong Huang
University of Science and Technology of China
Image/Video processing

Kai Chen
Institute of Photonics Technology, Jinan University, Guangzhou, China
Plasmonics; colloidal self-assembly; infrared spectroscopy