🤖 AI Summary
The escalating photorealism of AI-generated images has severely undermined the reliability of existing deepfake detection methods. To address this, we introduce DFBench—the first large-scale, multi-source deepfake image benchmark, comprising 540K images spanning 12 state-of-the-art generative models—enabling bidirectional evaluation of both detection accuracy and the evasion capability of generative models. We conduct the first systematic zero-shot assessment of large multimodal models (LMMs) for deepfake detection and propose MoA-DF, a mixture-of-agents method that aggregates output probabilities from multiple LMMs for robust zero-shot detection. Experiments demonstrate that MoA-DF achieves state-of-the-art performance on DFBench. To foster reproducible research, we fully open-source the benchmark dataset, implementation code, and evaluation framework—establishing a foundational resource for advancing deepfake detection and the adversarial interplay between detection systems and generative models.
📝 Abstract
With the rapid advancement of generative models, the realism of AI-generated images has significantly improved, posing critical challenges for verifying digital content authenticity. Current deepfake detection methods often depend on datasets with limited generation models and content diversity, which fail to keep pace with the evolving complexity and increasing realism of AI-generated content. Large multimodal models (LMMs), widely adopted in various vision tasks, have demonstrated strong zero-shot capabilities, yet their potential in deepfake detection remains largely unexplored. To bridge this gap, we present **DFBench**, a large-scale DeepFake Benchmark featuring (i) broad diversity: 540,000 images across real, AI-edited, and AI-generated content; (ii) latest models: fake images generated by 12 state-of-the-art generation models; and (iii) bidirectional benchmarking: evaluating both the detection accuracy of deepfake detectors and the evasion capability of generative models. Based on DFBench, we propose **MoA-DF**, Mixture of Agents for DeepFake detection, leveraging a combined probability strategy from multiple LMMs. MoA-DF achieves state-of-the-art performance, further proving the effectiveness of leveraging LMMs for deepfake detection. Database and codes are publicly available at https://github.com/IntMeGroup/DFBench.
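The "combined probability strategy" can be illustrated with a minimal sketch. Note this is an assumption-laden illustration, not the paper's actual implementation: the function names, the uniform-average fusion rule, and the 0.5 decision threshold are all hypothetical stand-ins for whatever prompts, models, and fusion MoA-DF actually uses.

```python
# Hypothetical sketch of a mixture-of-agents probability combination.
# Each "agent" is assumed to be an LMM that, given an image, returns an
# estimated probability that the image is fake; the fusion rule here is
# a simple (optionally weighted) average, chosen for illustration only.

def combine_fake_probabilities(agent_probs, weights=None):
    """Fuse per-agent P(fake) estimates into a single score."""
    if weights is None:
        weights = [1.0] * len(agent_probs)
    total = sum(weights)
    return sum(p * w for p, w in zip(agent_probs, weights)) / total

def classify(agent_probs, threshold=0.5):
    """Label an image 'fake' if the fused probability exceeds the threshold."""
    return "fake" if combine_fake_probabilities(agent_probs) > threshold else "real"

# Example: three LMM agents report their estimated P(fake) for one image.
probs = [0.9, 0.7, 0.4]
print(round(combine_fake_probabilities(probs), 3))  # fused score
print(classify(probs))
```

Averaging independent estimates is the simplest way to damp any single model's failure mode; a weighted variant lets stronger detectors dominate the vote.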