🤖 AI Summary
Existing deepfake detection benchmarks suffer from outdated generation methods, low realism, and limited modality diversity, hindering robust detection of high-fidelity synthetic images. To address this, we introduce the first large-scale, politically sensitive, open-source benchmark—comprising 3 million real images and 963,000 high-quality, multi-model synthetic images generated by state-of-the-art diffusion models—and incorporate descriptive text-image alignment to enable cross-modal detection. We further propose a long-term evolutionary crowdsourced adversarial platform that continuously integrates community-submitted hard examples to drive dynamic, iterative model refinement. Experimental results demonstrate substantial improvements in model generalization under high-realism conditions. The platform has already attracted broad participation from academia and industry, shifting deepfake detection from static evaluation toward a sustainable, co-evolutionary paradigm.
📝 Abstract
Deepfakes, synthetic media created using advanced AI techniques, have intensified the spread of misinformation, particularly in politically sensitive contexts. Existing deepfake detection datasets are often limited, relying on outdated generation methods, low realism, or single-face imagery, restricting their effectiveness for general synthetic-image detection. By analyzing social media posts, we identify multiple modalities through which deepfakes propagate misinformation. Furthermore, our human perception study demonstrates that recently developed proprietary models produce synthetic images increasingly indistinguishable from real ones, complicating accurate identification by the general public. Consequently, we present a comprehensive, politically focused dataset specifically crafted for benchmarking detection against modern generative models. This dataset contains three million real images paired with descriptive captions, which are used to generate 963k corresponding high-quality synthetic images from a mix of proprietary and open-source models. Recognizing the continual evolution of generative techniques, we introduce an innovative crowdsourced adversarial platform, where participants are incentivized to generate and submit challenging synthetic images. This ongoing community-driven initiative ensures that deepfake detection methods remain robust and adaptive, proactively safeguarding public discourse from sophisticated misinformation threats.
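As a rough illustration of how a benchmark built around real/synthetic image pairs sharing a caption might be consumed, the sketch below scores a placeholder detector over such pairs. All names here (`BenchmarkSample`, `naive_detector`, the threshold value) are hypothetical stand-ins, not part of the released dataset or any detector described above.

```python
from dataclasses import dataclass

@dataclass
class BenchmarkSample:
    """One hypothetical entry: a caption shared by a real image
    and its synthetic counterpart, plus a detector's realism score."""
    caption: str
    detector_score: float  # higher = more likely synthetic (assumed convention)
    is_synthetic: bool     # ground-truth label

def naive_detector(sample: BenchmarkSample, threshold: float = 0.5) -> bool:
    """Flag a sample as synthetic when its score exceeds the threshold."""
    return sample.detector_score > threshold

def accuracy(samples: list[BenchmarkSample], threshold: float = 0.5) -> float:
    """Fraction of samples where the thresholded prediction matches the label."""
    correct = sum(naive_detector(s, threshold) == s.is_synthetic for s in samples)
    return correct / len(samples)

# Toy paired data: each caption appears once as a real image, once as a fake.
samples = [
    BenchmarkSample("rally crowd at dusk", 0.2, False),
    BenchmarkSample("rally crowd at dusk", 0.9, True),
    BenchmarkSample("candidate at podium", 0.4, False),
    BenchmarkSample("candidate at podium", 0.7, True),
]
print(accuracy(samples))  # 1.0 on this toy split
```

Pairing real and synthetic images under the same caption, as the dataset does, lets an evaluation like this isolate detector performance from content differences: both classes depict the same scenes, so only generation artifacts distinguish them.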