READ: Real-time and Efficient Asynchronous Diffusion for Audio-driven Talking Head Generation

📅 2025-08-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion models suffer from slow inference, which hinders real-time deployment in audio-driven talking head generation. To address this, the authors propose READ, the first real-time diffusion-transformer talking head generation framework. It pairs a spatiotemporally compressive video VAE with a speech autoencoder to establish a low-dimensional, cross-modally aligned audio-visual latent space, which is modeled by an Audio-to-Video Diffusion Transformer (A2V-DiT) backbone. An Asynchronous Noise Scheduler (ANS) further decouples noise updates along the temporal dimension during denoising, jointly ensuring temporal coherence and computational efficiency. Experiments demonstrate that READ achieves real-time inference (≥25 FPS) while preserving high visual fidelity and precise lip-sync accuracy, significantly outperforming existing diffusion-based methods, and exhibits superior stability and state-of-the-art performance in long-sequence generation.

📝 Abstract
The introduction of diffusion models has brought significant advances to the field of audio-driven talking head generation. However, their extremely slow inference speed severely limits the practical deployment of diffusion-based talking head generation models. In this study, we propose READ, the first real-time diffusion-transformer-based talking head generation framework. Our approach first learns a highly compressed spatiotemporal video latent space via a temporal VAE, significantly reducing the token count to accelerate generation. To achieve better audio-visual alignment within this compressed latent space, a pre-trained Speech Autoencoder (SpeechAE) is proposed to generate temporally compressed speech latent codes corresponding to the video latent space. These latent representations are then modeled by a carefully designed Audio-to-Video Diffusion Transformer (A2V-DiT) backbone for efficient talking head synthesis. Furthermore, to ensure temporal consistency and accelerated inference in extended generation, we propose a novel asynchronous noise scheduler (ANS) for both the training and inference process of our framework. The ANS leverages asynchronous add-noise and asynchronous motion-guided generation in the latent space, ensuring consistency across generated video clips. Experimental results demonstrate that READ outperforms state-of-the-art methods by generating competitive talking head videos with significantly reduced runtime, achieving an optimal balance between quality and speed while maintaining robust metric stability in long-duration generation.
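To see why the temporal VAE's compression matters for speed, a back-of-the-envelope token count helps. The sketch below is illustrative only: the compression factors, patch size, and resolution are assumptions for the example, not values reported by READ.

```python
# Illustrative token-count arithmetic for a spatiotemporally
# compressed video latent space. All factors below are assumptions
# for illustration, not READ's actual configuration.

def dit_token_count(frames, height, width,
                    t_down=4, s_down=8, patch=2):
    """Number of tokens a DiT processes after VAE compression
    (t_down temporally, s_down spatially) and patchification."""
    lat_t = frames // t_down           # temporally compressed frames
    lat_h = height // s_down           # spatially compressed height
    lat_w = width // s_down            # spatially compressed width
    return lat_t * (lat_h // patch) * (lat_w // patch)

# A spatial-only VAE (t_down=1) vs. an added 4x temporal compression:
raw = dit_token_count(64, 512, 512, t_down=1)
compressed = dit_token_count(64, 512, 512, t_down=4)
print(raw, compressed, raw // compressed)  # → 65536 16384 4
```

Since transformer attention cost grows quadratically with token count, even a modest temporal compression factor yields a large inference speedup.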
Problem

Research questions and friction points this paper is trying to address.

Slow inference speed in diffusion-based talking head generation
Achieving real-time audio-driven talking head synthesis
Ensuring temporal consistency in extended video generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Temporal VAE compresses video latent space
SpeechAE aligns audio-visual latent codes
Asynchronous noise scheduler accelerates inference
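The core idea behind the asynchronous noise scheduler can be sketched as each temporal latent frame carrying its own noise level, so that leading frames denoise first and can guide the motion of trailing ones. The linear stagger, clipping, and window length below are assumptions for illustration; the summary does not specify READ's actual schedule.

```python
import numpy as np

# Hedged sketch of asynchronous noise scheduling: each temporal latent
# frame gets a staggered diffusion timestep instead of a shared one.
# The linear per-frame offset here is an illustrative assumption.

def asynchronous_timesteps(num_frames, num_steps, step):
    """Per-frame diffusion timesteps at denoising iteration `step`:
    frame i lags frame 0 by i, clipped to [0, num_steps]."""
    base = num_steps - step              # the usual synchronous timestep
    offsets = np.arange(num_frames)      # temporal stagger across frames
    return np.clip(base + offsets, 0, num_steps)

# After num_steps iterations, the leading frame is fully denoised while
# trailing frames still carry noise and are denoised with the clean
# leading frames acting as motion guidance.
print(asynchronous_timesteps(num_frames=4, num_steps=10, step=10))
```

Under this kind of schedule, a long video can be generated as a sliding window: finished frames exit the window as guidance while fresh noisy frames enter, which is one way such a scheduler can keep extended generation both consistent and fast.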
👥 Authors
Haotian Wang
University of Science and Technology of China
Yuzhe Weng
University of Science and Technology of China
Jun Du
University of Science and Technology of China
Haoran Xu
iFLYTEK
Xiaoyan Wu
iFLYTEK
Shan He
iFLYTEK
Bing Yin
Amazon.com
Cong Liu
iFLYTEK
Jianqing Gao
iFLYTEK
Qingfeng Liu
Professor, Hosei University