DFBench: Benchmarking Deepfake Image Detection Capability of Large Multimodal Models

📅 2025-06-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
The escalating photorealism of AI-generated images has severely undermined the reliability of existing deepfake detection methods. To address this, we introduce DFBench—the first large-scale, multi-source deepfake image benchmark comprising 540K images spanning 12 state-of-the-art generative models—enabling bidirectional evaluation of both detection accuracy and generative model evasion capability. We conduct the first systematic zero-shot assessment of large multimodal models (LMMs) for deepfake detection and propose MoA-DF, a mixture-of-agents method that aggregates output probabilities from multiple LMMs to achieve robust zero-shot detection. Experiments demonstrate that MoA-DF achieves state-of-the-art performance on DFBench. To foster reproducible research, we fully open-source the benchmark dataset, implementation code, and evaluation framework—establishing a foundational resource for advancing deepfake detection and adversarial studies between detection systems and generative models.

📝 Abstract
With the rapid advancement of generative models, the realism of AI-generated images has significantly improved, posing critical challenges for verifying digital content authenticity. Current deepfake detection methods often depend on datasets with limited generation models and content diversity that fail to keep pace with the evolving complexity and increasing realism of AI-generated content. Large multimodal models (LMMs), widely adopted in various vision tasks, have demonstrated strong zero-shot capabilities, yet their potential in deepfake detection remains largely unexplored. To bridge this gap, we present DFBench, a large-scale DeepFake Benchmark featuring (i) broad diversity, including 540,000 images across real, AI-edited, and AI-generated content, (ii) latest models, with fake images generated by 12 state-of-the-art generation models, and (iii) bidirectional benchmarking, evaluating both the detection accuracy of deepfake detectors and the evasion capability of generative models. Based on DFBench, we propose MoA-DF, Mixture of Agents for DeepFake detection, leveraging a combined probability strategy from multiple LMMs. MoA-DF achieves state-of-the-art performance, further proving the effectiveness of leveraging LMMs for deepfake detection. Database and codes are publicly available at https://github.com/IntMeGroup/DFBench.
Problem

Research questions and friction points this paper is trying to address.

Evaluating large multimodal models' deepfake detection capabilities
Addressing limited diversity in current deepfake detection datasets
Exploring zero-shot potential of LMMs for authenticating digital content
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale DeepFake Benchmark (DFBench)
Mixture of Agents for DeepFake detection (MoA-DF)
Leveraging multiple Large Multimodal Models (LMMs)
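The combined-probability idea behind MoA-DF can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each LMM agent returns an estimated probability that an image is fake, and combines them with a simple mean; the function name, signature, and threshold are hypothetical.

```python
def moa_detect(agent_probs, threshold=0.5):
    """Combine per-LMM fake probabilities into a single verdict.

    agent_probs: list of floats in [0, 1], one per LMM agent
                 (hypothetical interface, for illustration only).
    Returns (combined_score, is_fake).
    """
    if not agent_probs:
        raise ValueError("need at least one agent probability")
    # Simple mean aggregation; a real system could weight agents
    # by their validated accuracy instead.
    combined = sum(agent_probs) / len(agent_probs)
    return combined, combined >= threshold

# Example: three agents disagree; the aggregated score decides.
score, is_fake = moa_detect([0.9, 0.6, 0.4])
```

Averaging is only one possible aggregation rule; the point of the mixture-of-agents framing is that combining several zero-shot LMM judgments is more robust than trusting any single model.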
👥 Authors
Jiarui Wang — Shanghai Jiao Tong University, Shanghai, China
Huiyu Duan — Shanghai Jiao Tong University (Multimedia Signal Processing)
Juntong Wang — Shanghai Jiao Tong University (VQA, LMMs, RL)
Ziheng Jia — Shanghai Jiaotong University / Shanghai AILab (LLM and LMM on Visual Quality Assessment)
Woo Yi Yang — Shanghai Jiao Tong University, Shanghai, China
Xiaorong Zhu — Shanghai Jiao Tong University, Shanghai, China
Yu Zhao — Shanghai Jiao Tong University, Shanghai, China
Jiaying Qian — Unknown affiliation
Yuke Xing — Shanghai Jiao Tong University, Shanghai, China
Guangtao Zhai — Professor, IEEE Fellow, Shanghai Jiao Tong University (Multimedia Signal Processing, Visual Quality Assessment, QoE, AI Evaluation, Displays)
Xiongkuo Min — Shanghai Jiao Tong University, Shanghai, China