🤖 AI Summary
Talking-head video generation currently lacks a multidimensional, reproducible evaluation framework; existing metrics inadequately balance visual quality, motion naturalness, and lip-sync accuracy. To address this, we propose the first three-dimensional evaluation framework for talking-head videos, spanning Quality, Naturalness, and Synchrony, with eight efficient, human-aligned automatic metrics. For the first time, we systematically model fine-grained dynamic facial attributes, including head pose, lip articulation, and eyebrow motion, and we release the first bias-mitigated, real-world benchmark dataset. Combining multimodal analysis, facial keypoint tracking, and statistical consistency testing, validated through large-scale user studies, we evaluate 17 state-of-the-art models on 85,000 videos, uncovering prevalent bottlenecks such as expression distortion and artifact generation. We open-source an extensible benchmark platform with a continuously maintained leaderboard.
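To make the framework's structure concrete, here is a minimal sketch of how per-video scores from the eight metrics could be aggregated into the three dimensions. The dimension grouping follows the summary above, but the metric names, the assumption that scores are pre-normalized to [0, 1], and the unweighted averaging are illustrative placeholders, not the released implementation.

```python
# Minimal aggregation sketch; metric names below are hypothetical.
from statistics import mean
from typing import Dict, List

# Hypothetical grouping: each dimension owns a subset of the eight metrics.
DIMENSIONS: Dict[str, List[str]] = {
    "Quality":     ["face_quality", "artifact_score"],
    "Naturalness": ["head_dynamics", "mouth_dynamics",
                    "eyebrow_dynamics", "expressiveness"],
    "Synchrony":   ["lip_sync_offset", "lip_sync_confidence"],
}

def dimension_scores(per_video: List[Dict[str, float]]) -> Dict[str, float]:
    """Collapse per-video metric dicts into one score per dimension.

    Assumes every metric is already normalized to [0, 1] with a
    higher-is-better orientation.
    """
    out = {}
    for dim, metrics in DIMENSIONS.items():
        # Average each metric over the corpus, then average within the dimension.
        out[dim] = mean(mean(v[m] for v in per_video) for m in metrics)
    return out

if __name__ == "__main__":
    # Two dummy videos with flat 0.5 scores, just to show the output shape.
    dummy = [{m: 0.5 for ms in DIMENSIONS.values() for m in ms}] * 2
    print(dimension_scores(dummy))
```

The point here is only the shape of the aggregation; the actual normalizations and any per-metric weighting would follow the paper's definitions.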
📝 Abstract
Video generation has achieved remarkable progress, with generated videos increasingly resembling real ones. However, rapid advances in generation have outpaced the development of adequate evaluation metrics. Currently, the assessment of talking-head generation relies primarily on a limited set of metrics covering general video quality and lip synchronization, and on user studies. Motivated by this, we propose a new evaluation framework comprising 8 metrics spanning three dimensions: (i) quality, (ii) naturalness, and (iii) synchronization. In selecting the metrics, we emphasize efficiency as well as alignment with human preferences. Based on these considerations, we analyze fine-grained dynamics of the head, mouth, and eyebrows, as well as face quality. Our extensive experiments on 85,000 videos generated by 17 state-of-the-art models suggest that while many algorithms excel at lip synchronization, they struggle to generate expressive and artifact-free details. These videos were generated from a novel real-world dataset that we curated to mitigate training-data bias. Our proposed benchmark framework is intended to track the improvement of generative methods. Code, dataset, and leaderboards will be publicly released and regularly updated with new methods to reflect progress in the field.
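As an illustration of the fine-grained dynamics analysis described above, the sketch below extracts per-frame mouth-opening and eyebrow-raise traces from MediaPipe FaceMesh landmarks and summarizes them with simple variance statistics. The landmark indices, the face-height normalization, and the variance-based summary are assumptions for demonstration, not the paper's actual metric definitions.

```python
# Sketch of per-video facial-dynamics traces via MediaPipe FaceMesh.
import cv2
import mediapipe as mp
import numpy as np

# Commonly used FaceMesh landmark indices (illustrative choices).
UPPER_LIP, LOWER_LIP = 13, 14   # inner upper / lower lip
BROW, EYELID = 105, 159         # left eyebrow / left upper eyelid
FOREHEAD, CHIN = 10, 152        # endpoints used to normalize by face height

def facial_dynamics(video_path: str) -> dict:
    """Return variance of mouth-opening and brow-raise traces for one video."""
    mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False,
                                           max_num_faces=1)
    mouth_trace, brow_trace = [], []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if not result.multi_face_landmarks:
            continue  # skip frames where no face is detected
        lm = result.multi_face_landmarks[0].landmark
        pt = lambda i: np.array([lm[i].x, lm[i].y])
        face_h = np.linalg.norm(pt(FOREHEAD) - pt(CHIN))  # scale normalizer
        mouth_trace.append(np.linalg.norm(pt(UPPER_LIP) - pt(LOWER_LIP)) / face_h)
        brow_trace.append(np.linalg.norm(pt(BROW) - pt(EYELID)) / face_h)
    cap.release()
    mesh.close()
    # Higher temporal variance of the traces serves here as a crude proxy
    # for expressiveness; a nearly flat trace suggests a static face.
    return {"mouth_var": float(np.var(mouth_trace)),
            "brow_var": float(np.var(brow_trace))}
```

A pipeline of this kind, applied per video and compared against the statistics of real footage, is one plausible way to quantify the expressiveness bottlenecks reported above.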