🤖 AI Summary
Addressing the challenge of low-latency, high-fidelity audio-driven 3D facial animation in virtual reality, this paper proposes an end-to-end real-time generation method. Methodologically, the authors introduce (1) an online Transformer architecture that processes only historical and current audio frames, eliminating reliance on future speech; (2) a single-step denoising distillation strategy that drastically accelerates diffusion-based inference; and (3) an encoder–online Transformer–decoder framework enabling multimodal control, including emotion modulation and eye movement synthesis. The system achieves an end-to-end latency under 15 ms, surpasses state-of-the-art offline methods in facial expression accuracy across multilingual speech scenarios, and accelerates inference by 100–1000×. It has been deployed in a real-time VR social interaction demonstration, validating its practical efficacy for immersive, interactive applications.
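The "online Transformer" idea in point (1) amounts to causal self-attention: each audio frame may attend only to current and past frames, never future ones. A minimal single-head sketch in NumPy (shapes and names are illustrative, not the authors' implementation):

```python
import numpy as np

def online_attention(q, k, v):
    """Scaled dot-product attention with a causal mask, so each frame
    attends only to itself and earlier frames (no future lookahead).
    q, k, v: arrays of shape (T, d) for T audio frames."""
    T, d = q.shape
    scores = q @ k.T / np.sqrt(d)                      # (T, T) frame-to-frame scores
    future = np.triu(np.ones((T, T), dtype=bool), k=1) # True above diagonal = future frames
    scores[future] = -np.inf                           # block attention to the future
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights

rng = np.random.default_rng(0)
T, d = 4, 8
q, k, v = (rng.standard_normal((T, d)) for _ in range(3))
out, w = online_attention(q, k, v)
# every attention weight on a future frame is exactly zero
assert np.allclose(np.triu(w, k=1), 0.0)
```

Because no frame depends on future input, the model can run frame by frame on a live audio stream, which is what removes the lookahead latency that offline methods incur.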
📝 Abstract
We present an audio-driven real-time system for animating photorealistic 3D facial avatars with minimal latency, designed for social interactions in virtual reality. Central to our approach is an encoder model that transforms audio signals into latent facial expression sequences in real time, which are then decoded into photorealistic 3D facial avatars. Leveraging the generative capabilities of diffusion models, we capture the rich spectrum of facial expressions necessary for natural communication while achieving real-time performance (<15 ms GPU time). Our novel architecture minimizes latency through two key innovations: an online transformer that eliminates dependency on future inputs, and a distillation pipeline that collapses iterative denoising into a single step. We further address critical design challenges in live scenarios, processing continuous audio signals frame by frame while maintaining consistent animation quality. The versatility of our framework extends to multimodal applications, including semantic modalities such as emotion conditioning and multimodal sensor inputs such as head-mounted eye cameras on VR headsets. Experimental results demonstrate significant improvements in facial animation accuracy over existing offline state-of-the-art baselines, with 100 to 1000 times faster inference. We validate our approach through live VR demonstrations and across varied scenarios such as multilingual speech.
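The distillation pipeline mentioned above trains a fast student to reproduce, in one step, the output a slow teacher reaches through many denoising iterations. A toy sketch of this idea (everything here is hypothetical: a linear iterative "teacher" and a closed-form linear "student" stand in for the paper's diffusion transformer and distilled model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical teacher: refines a latent over many small steps.
W_true = rng.standard_normal((8, 8)) * 0.1

def teacher(x, steps=50):
    """Multi-step iterative refinement (stand-in for iterative denoising)."""
    for _ in range(steps):
        x = x + 0.1 * (x @ W_true - x)
    return x

# Distillation: fit a single-step student to match the teacher's
# full multi-step outputs on sampled latents.
X = rng.standard_normal((256, 8))                   # input latents
Y = teacher(X)                                      # teacher's multi-step results
W_student, *_ = np.linalg.lstsq(X, Y, rcond=None)   # one-step student (closed form)

# The student now reaches the teacher's result in a single matmul.
x_new = rng.standard_normal((4, 8))
err = np.abs(x_new @ W_student - teacher(x_new)).max()
assert err < 1e-6
```

The toy teacher is linear, so the student matches it exactly; in the real system the student is a neural network trained by regression against the teacher's denoised outputs, trading many network evaluations for one and enabling the reported 100–1000× speedup.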