Dimitra: Audio-driven Diffusion model for Expressive Talking Head Generation

📅 2025-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses audio-driven expressive talking-head generation, jointly modeling lip motion, facial expression, and head pose. Methodologically, it introduces a unified motion modeling framework based on a conditional Motion Diffusion Transformer, decoupling phoneme-level audio features (which drive lip articulation) from text transcriptions (which control expression and head pose). It employs 3D facial motion representations and multi-granularity audio feature extraction. Evaluated on VoxCeleb2 and HDTF, the method outperforms state-of-the-art approaches on the major metrics: lip motion accuracy (LMD), facial dynamics diversity (FDD), and head motion realism (HMD). Qualitative results further demonstrate superior visual fidelity and temporal coherence in generated videos.

📝 Abstract
We propose Dimitra, a novel framework for audio-driven talking head generation, streamlined to learn lip motion, facial expression, as well as head pose motion. Specifically, we train a conditional Motion Diffusion Transformer (cMDT) by modeling facial motion sequences with 3D representation. We condition the cMDT with only two input signals, an audio-sequence, as well as a reference facial image. By extracting additional features directly from audio, Dimitra is able to increase quality and realism of generated videos. In particular, phoneme sequences contribute to the realism of lip motion, whereas text transcript to facial expression and head pose realism. Quantitative and qualitative experiments on two widely employed datasets, VoxCeleb2 and HDTF, showcase that Dimitra is able to outperform existing approaches for generating realistic talking heads imparting lip motion, facial expression, and head pose.
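To make the generation pipeline concrete, below is a minimal numpy sketch of the kind of conditional diffusion sampling loop the abstract describes: a noise-prediction model (standing in for the cMDT) is queried at each reverse step, conditioned on embeddings derived from audio, phonemes, text, and the reference face. All names here are hypothetical illustrations; the paper's actual cMDT is a trained transformer over 3D facial motion sequences, and this is only a generic DDPM-style sampler under that interface.

```python
import numpy as np

def cosine_alphas(steps: int) -> np.ndarray:
    """Cumulative alpha-bar values from a cosine noise schedule."""
    t = np.linspace(0.0, 1.0, steps + 1)
    f = np.cos((t + 0.008) / 1.008 * np.pi / 2) ** 2
    return f[1:] / f[0]

def denoise_step(x_t, t, alphas_bar, eps_model, cond, rng):
    """One ancestral DDPM step: predict the noise, then sample x_{t-1}."""
    a_bar = alphas_bar[t]
    a_bar_prev = alphas_bar[t - 1] if t > 0 else 1.0
    alpha_t = a_bar / a_bar_prev
    eps = eps_model(x_t, t, cond)  # cMDT-like noise prediction (hypothetical)
    # Posterior mean of x_{t-1} given x_t and the predicted noise.
    mean = (x_t - (1 - alpha_t) / np.sqrt(1 - a_bar) * eps) / np.sqrt(alpha_t)
    if t == 0:
        return mean
    sigma = np.sqrt((1 - a_bar_prev) / (1 - a_bar) * (1 - alpha_t))
    return mean + sigma * rng.standard_normal(x_t.shape)

def sample_motion(eps_model, cond, frames=25, dims=64, steps=50, seed=0):
    """Run the reverse diffusion process from Gaussian noise to a
    (frames x dims) motion sequence, conditioned on `cond`."""
    rng = np.random.default_rng(seed)
    alphas_bar = cosine_alphas(steps)
    x = rng.standard_normal((frames, dims))
    for t in range(steps - 1, -1, -1):
        x = denoise_step(x, t, alphas_bar, eps_model, cond, rng)
    return x
```

In Dimitra's setting, `eps_model` would be the cMDT attending over the concatenated audio, phoneme, and text-transcript features plus the reference-image embedding, and the sampled sequence would parameterize 3D facial motion before rendering.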
Problem

Research questions and friction points this paper is trying to address.

Generating realistic talking heads directly from audio.
Jointly improving lip motion, facial expression, and head pose.
Conditioning on only an audio sequence and a reference facial image to improve video quality.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Audio-driven diffusion model for facial motion generation
Conditional Motion Diffusion Transformer (cMDT) over 3D facial motion
Phoneme features for lip realism; text transcripts for expression and head pose
Baptiste Chopin
Université Côte d'Azur, Inria, STARS Team, France
Tashvik Dhamija
Université Côte d'Azur, Inria, STARS Team, France
Pranav Balaji
Université Côte d'Azur, Inria, STARS Team, France
Yaohui Wang
Research Scientist, Shanghai AI Laboratory | Inria
Machine Learning · Deep Generative Models · Video Generation
Antitza Dantcheva
Directrice de Recherche, Inria, France
Video generation · Deepfake generation and detection · Face analysis for health monitoring and