Beyond Face Swapping: A Diffusion-Based Digital Human Benchmark for Multimodal Deepfake Detection

📅 2025-05-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion-based multimodal digital human forgery poses novel challenges to existing deepfake detection methods. Method: We introduce DigiFakeAV—the first large-scale, multi-ethnic, multi-scenario benchmark comprising 60,000 videos and 8.4 million frames—and propose DigiShield, a multimodal collaborative detection framework. DigiShield fuses spatiotemporal video features with semantic-acoustic audio features, focusing on subtle temporal artifacts in facial dynamics via cross-modal feature alignment and fine-grained temporal modeling. Contribution/Results: Experiments show that DigiFakeAV significantly degrades the AUC of mainstream SOTA detectors, and user studies report a 68% confusion rate between forged and real videos. DigiShield achieves state-of-the-art performance on both DigiFakeAV and DF-TIMIT, demonstrating strong generalization and robustness against diffusion-driven digital human forgeries.

📝 Abstract
In recent years, the rapid development of deepfake technology has given rise to an emerging and serious threat to public security: diffusion model-based digital human generation. Unlike traditional face manipulation methods, such models can generate highly realistic and temporally consistent videos driven by multimodal control signals. Their flexibility and covertness pose severe challenges to existing detection strategies. To bridge this gap, we introduce DigiFakeAV, the first large-scale multimodal digital human forgery dataset based on diffusion models. Employing five of the latest digital human generation methods (Sonic, Hallo, etc.) and a voice cloning method, we systematically produce a dataset comprising 60,000 videos (8.4 million frames), covering multiple nationalities, skin tones, genders, and real-world scenarios, significantly enhancing data diversity and realism. User studies show that the confusion rate between forged and real videos reaches 68%, and existing state-of-the-art (SOTA) detection models exhibit large drops in AUC on DigiFakeAV, highlighting the challenge posed by the dataset. To address this problem, we further propose DigiShield, a detection baseline based on spatiotemporal and cross-modal fusion. By jointly modeling the 3D spatiotemporal features of videos and the semantic-acoustic features of audio, DigiShield achieves SOTA performance on both the DigiFakeAV and DF-TIMIT datasets. Experiments show that this method effectively identifies covert artifacts through fine-grained analysis of the temporal evolution of facial features in synthetic videos.
Problem

Research questions and friction points this paper is trying to address.

Detecting diffusion-based deepfake videos with multimodal control signals
Addressing limitations of existing detection strategies on realistic forgeries
Improving detection accuracy for synthetic facial and acoustic artifacts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion-based digital human generation dataset
Spatiotemporal and cross-modal fusion detection
Multimodal control signals for realistic videos
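The spatiotemporal and cross-modal fusion idea above can be sketched in miniature: project per-frame video and audio features into a shared space, let video frames attend over audio frames, and score the pooled result. This is an illustrative sketch only, not the authors' DigiShield implementation; all dimensions, weights, and the attention form are assumptions, with random matrices standing in for trained backbones.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical sizes: T frames, video/audio feature dims, shared dim.
T, D_V, D_A, D = 16, 128, 64, 32

# Stand-ins for backbone outputs (e.g. a 3D CNN over video clips and
# an audio encoder over the waveform); random here for illustration.
video_feats = rng.standard_normal((T, D_V))
audio_feats = rng.standard_normal((T, D_A))

# Project both modalities into a shared space (untrained random weights).
W_v = rng.standard_normal((D_V, D)) / np.sqrt(D_V)
W_a = rng.standard_normal((D_A, D)) / np.sqrt(D_A)
v = video_feats @ W_v  # (T, D)
a = audio_feats @ W_a  # (T, D)

# Cross-modal attention: each video frame attends over all audio frames,
# aligning facial dynamics with the co-occurring acoustic content.
attn = softmax(v @ a.T / np.sqrt(D))            # (T, T), rows sum to 1
fused = np.concatenate([v, attn @ a], axis=-1)  # (T, 2D)

# Temporal mean pooling + linear head -> real/fake probability.
w_out = rng.standard_normal(2 * D) / np.sqrt(2 * D)
score = 1.0 / (1.0 + np.exp(-fused.mean(axis=0) @ w_out))
print(f"fake probability: {score:.3f}")
```

In a trained detector the projections, attention, and head would be learned jointly, so that audio-visual misalignment (e.g. lip motion inconsistent with phonemes) pushes the score toward "fake".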
Jiaxin Liu
Beijing University of Posts and Telecommunications
Jia Wang
Beijing University of Posts and Telecommunications
Saihui Hou
Beijing Normal University
Deep Learning, Computer Vision, Multimodal Large Language Models
Min Ren
Continental Advanced Lidar Solutions US, LLC
Photonics, Avalanche Photodiodes, Single Photon Detectors, Single Photon Detection, Lidar
Huijia Wu
Beijing University of Posts and Telecommunications
Zhaofeng He
Beijing University of Posts and Telecommunications