🤖 AI Summary
This work proposes the first end-to-end audio-driven avatar large language model, one that jointly reasons over linguistic semantics, speech prosody, and 3D facial dynamics within a unified framework. This design addresses the limitations of conventional cascaded architectures (error propagation, high latency, and inadequate emotional expressiveness) as well as the narrow focus on lip-sync accuracy in prior approaches. By introducing FLAME-QA, a high-quality multimodal question-answering dataset for cross-modal alignment, together with a semantics-driven 3D facial animation generation technique, the system achieves real-time performance (500 ms latency, 0.7 RTF) while significantly improving emotional expressiveness, outperforming existing cascaded systems.
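For readers unfamiliar with the real-time factor (RTF) figure quoted above, the minimal sketch below shows how such a number is conventionally computed; the function name and example durations are illustrative, not taken from the paper.

```python
# A minimal sketch (not from the paper) of how a real-time factor (RTF)
# such as 0.7 is conventionally computed: wall-clock generation time
# divided by the duration of the audio/animation produced.
# RTF < 1.0 means the system runs faster than real time.

def real_time_factor(processing_seconds: float, output_seconds: float) -> float:
    """Return RTF = processing time / duration of generated output."""
    return processing_seconds / output_seconds

# Example: generating 10 s of speech-and-face output in 7 s of compute
# gives RTF = 0.7, i.e. the avatar can keep pace with a live conversation.
print(real_time_factor(7.0, 10.0))  # 0.7
```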
📝 Abstract
Developing expressive and responsive conversational digital humans is a cornerstone of next-generation human-computer interaction. While large language models (LLMs) have significantly enhanced dialogue capabilities, most current systems still rely on cascaded architectures that chain together independent modules. These pipelines are prone to accumulated errors, high latency, and poor real-time performance, and, lacking access to the underlying conversational context, they inherently prioritize rigid lip-sync over emotional depth. To address these challenges, we propose A$^2$-LLM, an end-to-end conversational audio avatar large language model that jointly reasons about language, audio prosody, and 3D facial motion within a unified framework. To facilitate training, we introduce FLAME-QA, a high-quality multimodal dataset designed to align semantic intent with expressive facial dynamics in a question-answering format. By leveraging deep semantic understanding, A$^2$-LLM generates emotionally rich facial movements that go beyond simple lip synchronization. Experimental results demonstrate that our system achieves superior emotional expressiveness while maintaining real-time efficiency (500 ms latency, 0.7 RTF).
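The abstract does not detail the model internals, but a minimal sketch can illustrate what jointly reasoning over language and 3D facial motion from one backbone might look like, in contrast to a cascaded pipeline. All module names, dimensions, and the FLAME parameter split below are assumptions made for illustration, not details from A$^2$-LLM.

```python
import torch
import torch.nn as nn

# Illustration only: one shared backbone state drives both the text reply
# and per-frame 3D face parameters (e.g. FLAME expression + jaw pose),
# instead of chaining ASR -> LLM -> TTS -> animation modules.
# Module names, dimensions, and the 50+3 FLAME parameter split are
# assumptions made for this sketch, not details taken from A^2-LLM.

class UnifiedAvatarHead(nn.Module):
    def __init__(self, hidden_dim: int = 1024, vocab_size: int = 32000,
                 flame_dim: int = 53):
        super().__init__()
        self.text_head = nn.Linear(hidden_dim, vocab_size)  # response tokens
        self.face_head = nn.Linear(hidden_dim, flame_dim)   # facial motion

    def forward(self, backbone_states: torch.Tensor):
        # backbone_states: (batch, time, hidden_dim) from a shared LLM backbone
        return self.text_head(backbone_states), self.face_head(backbone_states)

# Because both outputs are decoded from the same contextual states, the
# facial motion is conditioned on the same semantics and prosody as the
# spoken reply, which is the property the abstract contrasts with
# cascaded pipelines.
head = UnifiedAvatarHead()
states = torch.randn(1, 25, 1024)               # e.g. 1 s of features at 25 fps
token_logits, flame_params = head(states)
print(token_logits.shape, flame_params.shape)   # (1, 25, 32000) (1, 25, 53)
```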