DAWN: Dynamic Frame Avatar with Non-autoregressive Diffusion Framework for Talking Head Video Generation

📅 2024-10-17
🏛️ arXiv.org
🤖 AI Summary
Existing diffusion-based talking head video generation methods predominantly adopt autoregressive strategies, which suffer from limited contextual modeling, error accumulation, and slow inference. This paper proposes the first non-autoregressive diffusion framework for speech-driven talking head synthesis, enabling full-frame parallel generation of variable-length videos. Methodologically, it (1) disentangles facial motion, head pose, and blinking dynamics in the latent space, and (2) introduces audio-conditioned joint generation with multi-component collaborative control. Experiments demonstrate that the approach significantly accelerates inference — up to 10× faster than autoregressive baselines — while preserving high lip-sync accuracy, natural facial expressiveness, and smooth head motion. It also maintains superior fidelity and temporal stability during long-video extrapolation, outperforming state-of-the-art methods in both quantitative metrics and perceptual quality.
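The core contrast in the summary above — denoising every frame of a variable-length sequence jointly at each diffusion step, versus generating frames one at a time — can be sketched with a toy denoiser. Everything here (the function names, the shapes, and the decay-toward-audio stand-in for a real denoising network) is an illustrative assumption, not DAWN's actual model.

```python
import random

def denoise_step(latent, audio_feat):
    """Stand-in for one diffusion denoising step conditioned on audio.
    A real model would predict and subtract noise with a network; here
    the latent simply decays toward the audio feature."""
    return [0.9 * x + 0.1 * a for x, a in zip(latent, audio_feat)]

def generate_non_autoregressive(audio_feats, num_steps=10):
    """Denoise all frames of the sequence jointly at each diffusion step."""
    rng = random.Random(0)
    latents = [[rng.gauss(0, 1) for _ in f] for f in audio_feats]
    for _ in range(num_steps):
        # The whole sequence is updated in parallel; in a real model every
        # frame could attend to the full audio context at every step.
        latents = [denoise_step(l, f) for l, f in zip(latents, audio_feats)]
    return latents

def generate_autoregressive(audio_feats, num_steps=10):
    """Denoise one frame at a time; a frame is frozen once emitted."""
    rng = random.Random(0)
    frames = []
    for f in audio_feats:
        latent = [rng.gauss(0, 1) for _ in f]
        for _ in range(num_steps):
            latent = denoise_step(latent, f)
        frames.append(latent)  # errors in early frames propagate forward
    return frames

audio = [[1.0] * 8 for _ in range(16)]  # 16 frames, 8-dim audio features
nar = generate_non_autoregressive(audio)
ar = generate_autoregressive(audio)
print(len(nar), len(nar[0]))  # 16 8
```

The parallel variant runs one denoising loop over the whole sequence, which is why a non-autoregressive scheme can amortize inference across frames, while the autoregressive loop pays the full schedule per frame and can never revise an already-generated frame.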

📝 Abstract
Talking head generation aims to produce vivid and realistic talking head videos from a single portrait and a speech audio clip. Although significant progress has been made in diffusion-based talking head generation, almost all methods rely on autoregressive strategies, which suffer from limited context utilization beyond the current generation step, error accumulation, and slower generation speed. To address these challenges, we present DAWN (Dynamic frame Avatar With Non-autoregressive diffusion), a framework that enables all-at-once generation of dynamic-length video sequences. Specifically, it consists of two main components: (1) audio-driven holistic facial dynamics generation in the latent motion space, and (2) audio-driven head pose and blink generation. Extensive experiments demonstrate that our method generates authentic and vivid videos with precise lip motions and natural pose/blink movements. Additionally, with a high generation speed, DAWN possesses strong extrapolation capabilities, ensuring the stable production of high-quality long videos. These results highlight the considerable promise and potential impact of DAWN in the field of talking head video generation. Furthermore, we hope that DAWN sparks further exploration of non-autoregressive approaches in diffusion models. Our code will be publicly available at https://github.com/Hanbo-Cheng/DAWN-pytorch.
Problem

Research questions and friction points this paper is trying to address.

Generates talking head videos from a single portrait and audio clip
Overcomes autoregressive limitations with a non-autoregressive diffusion framework
Ensures accurate lip sync and natural head movements
Innovation

Methods, ideas, or system contributions that make the work stand out.

Non-autoregressive diffusion for video generation
Holistic facial dynamics in latent space
Head pose and blink generation from audio
Hanbo Cheng
University of Science and Technology of China
Limin Lin
University of Science and Technology of China
Chenyu Liu
IFLYTEK Research
Pengcheng Xia
Huazhong University of Science and Technology
Pengfei Hu
University of Science and Technology of China
Jie Ma
University of Science and Technology of China
Jun Du
University of Science and Technology of China
Jia Pan
IFLYTEK Research