THEval. Evaluation Framework for Talking Head Video Generation

📅 2025-11-06
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Current talking-head video generation lacks a multidimensional, reproducible evaluation framework; existing metrics inadequately balance visual quality, motion naturalness, and lip-sync accuracy. To address this, we propose the first three-dimensional evaluation framework for talking-head videos, comprising Quality, Naturalness, and Synchrony dimensions and featuring eight efficient, human-aligned automatic metrics. For the first time, we systematically model fine-grained dynamic facial attributes, including head pose, lip articulation, and eyebrow motion, and we release the first bias-mitigated, real-world benchmark dataset. Leveraging multimodal analysis, facial keypoint tracking, and statistical consistency testing, validated through large-scale user studies, we evaluate 17 state-of-the-art models on 85,000 videos, uncovering prevalent bottlenecks such as expression distortion and artifact generation. We open-source an extensible benchmark platform with a continuously maintained leaderboard.

📝 Abstract
Video generation has achieved remarkable progress, with generated videos increasingly resembling real ones. However, the rapid advance of generation methods has outpaced the development of adequate evaluation metrics. Currently, the assessment of talking head generation relies primarily on a limited set of metrics covering general video quality and lip synchronization, and on user studies. Motivated by this, we propose a new evaluation framework comprising 8 metrics related to three dimensions: (i) quality, (ii) naturalness, and (iii) synchronization. In selecting the metrics, we place emphasis on efficiency as well as alignment with human preferences. Based on these considerations, we analyze fine-grained dynamics of the head, mouth, and eyebrows, as well as face quality. Our extensive experiments on 85,000 videos generated by 17 state-of-the-art models suggest that while many algorithms excel at lip synchronization, they struggle to generate expressive and artifact-free details. These videos were generated from a novel real dataset that we curated in order to mitigate training-data bias. Our proposed benchmark framework is aimed at evaluating the improvement of generative methods. The code, dataset, and leaderboards will be publicly released and regularly updated with new methods to reflect progress in the field.
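The abstract groups 8 automatic metrics into three dimensions (quality, naturalness, synchronization). A minimal sketch of how such per-metric scores could be aggregated into dimension scores is shown below; the metric names and their grouping are illustrative assumptions, not the paper's actual metric set.

```python
# Hypothetical sketch: rolling 8 per-metric scores up into the three
# evaluation dimensions. The metric names and grouping here are
# assumptions for illustration; the paper defines its own metrics.
from statistics import mean

DIMENSIONS = {
    "quality": ["face_quality", "artifact_score"],
    "naturalness": ["head_dynamics", "mouth_dynamics", "eyebrow_dynamics"],
    "synchronization": ["lip_sync", "audio_offset", "expression_timing"],
}


def dimension_scores(metric_scores: dict[str, float]) -> dict[str, float]:
    """Average each group's normalized (0-1) metric scores into a dimension score."""
    return {
        dim: mean(metric_scores[m] for m in metrics)
        for dim, metrics in DIMENSIONS.items()
    }
```

In practice each metric would first be normalized to a comparable scale (and possibly weighted by its agreement with human preferences, as the paper emphasizes human alignment) before averaging.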
Problem

Research questions and friction points this paper is trying to address.

Current metrics inadequately evaluate the quality of talking-head video generation
Existing methods struggle with facial expressiveness and artifact-free details
Evaluation must jointly assess video naturalness, synchronization, and quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes an 8-metric framework for talking-head video evaluation
Analyzes fine-grained facial dynamics and quality
Uses novel real dataset to mitigate training bias
Nabyl Quignon
Inria Centre at Université Côte d’Azur, France
Baptiste Chopin
Inria Centre at Université Côte d’Azur, France
Yaohui Wang
Research Scientist, Shanghai AI Laboratory | Inria
Machine Learning · Deep Generative Models · Video Generation
A. Dantcheva
Inria Centre at Université Côte d’Azur, France