🤖 AI Summary
Existing deepfake detection benchmarks lag behind generative model advancements, relying on outdated generators and failing to assess detector robustness and generalization across diverse generators and identities.
Method: We introduce the first multi-model deepfake detection benchmark targeting state-of-the-art academic and commercial talking-head generators. Our framework supports multi-generator, multi-identity, and multi-protocol evaluation, enabling systematic generalization assessment under distribution shift for the first time. We further integrate Grad-CAM-based interpretability analysis to diagnose detection bias and release a high-quality dataset with standardized evaluation protocols.
Contribution/Results: Experiments reveal substantial performance degradation of mainstream detectors in cross-generator and cross-identity settings, exposing critical generalization bottlenecks. This work establishes a reproducible benchmark and diagnostic toolkit to advance robust deepfake detection research.
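To make the cross-generator setting concrete, below is a minimal sketch of how such a protocol could be scored: a pretrained detector is evaluated only on videos from generators held out of its training data, and AUC is computed on that subset. The sample fields (`generator`, `label`, `frames`) and the `detector.score` interface are illustrative assumptions, not the benchmark's actual API.

```python
# Hypothetical cross-generator evaluation sketch (not TalkingHeadBench's real API).
from sklearn.metrics import roc_auc_score

def cross_generator_auc(detector, samples, unseen_generators):
    """AUC of `detector` restricted to videos from generators it never saw in training."""
    held_out = [s for s in samples if s["generator"] in unseen_generators]
    y_true = [s["label"] for s in held_out]                     # 1 = fake, 0 = real
    y_score = [detector.score(s["frames"]) for s in held_out]  # predicted fake probability
    return roc_auc_score(y_true, y_score)
```

The cross-identity protocol follows the same pattern, filtering instead on a subject-identity field so that test identities are disjoint from training identities.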
📝 Abstract
The rapid advancement of talking-head deepfake generation, fueled by powerful generative models, has elevated the realism of synthetic videos to a level that poses substantial risks in domains such as media, politics, and finance. However, current benchmarks for talking-head deepfake detection fail to reflect this progress, relying on outdated generators and offering limited insight into model robustness and generalization. We introduce TalkingHeadBench, a comprehensive multi-model, multi-generator benchmark and curated dataset designed to evaluate the performance of state-of-the-art detectors on the most advanced generators. Our dataset includes deepfakes synthesized by leading academic and commercial models and features carefully constructed protocols to assess generalization under distribution shifts in identity and generator characteristics. We benchmark a diverse set of existing detection methods, including CNNs, vision transformers, and temporal models, and analyze their robustness and generalization capabilities. In addition, we provide error analysis using Grad-CAM visualizations to expose common failure modes and detector biases. TalkingHeadBench is hosted at https://huggingface.co/datasets/luchaoqi/TalkingHeadBench with open access to all data splits and protocols. Our benchmark aims to accelerate research toward more robust and generalizable detection models in the face of rapidly evolving generative techniques.
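As a rough illustration of the Grad-CAM error analysis described above, the sketch below computes a Grad-CAM heatmap for the "fake" class of a binary frame-level detector, implemented from scratch with PyTorch hooks. The ResNet-50 backbone, the choice of `layer4` as the target layer, and the dummy input frame are assumptions for illustration; in practice the frames would come from the TalkingHeadBench splits and the model would be one of the benchmarked detectors.

```python
# Minimal from-scratch Grad-CAM sketch for inspecting which image regions a
# CNN detector attends to. Backbone and target layer are illustrative choices.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

model = resnet50(num_classes=2).eval()   # stand-in binary real/fake detector
target_layer = model.layer4              # last convolutional block

feats, grads = {}, {}
target_layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

def grad_cam(frame, cls=1):
    """Return a normalized (H, W) heatmap for class `cls` (1 = fake) on a 3xHxW frame."""
    logits = model(frame.unsqueeze(0))
    model.zero_grad()
    logits[0, cls].backward()
    weights = grads["a"].mean(dim=(2, 3), keepdim=True)   # channel importance
    cam = F.relu((weights * feats["a"]).sum(dim=1))       # weighted activation map
    cam = F.interpolate(cam.unsqueeze(1), size=frame.shape[1:],
                        mode="bilinear", align_corners=False)[0, 0]
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

heatmap = grad_cam(torch.randn(3, 224, 224))              # dummy frame for demonstration
```

Overlaying such heatmaps on test frames is one way to expose the detector biases the benchmark reports, for example attention concentrated on background or compression artifacts rather than facial regions.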