So-Fake: Benchmarking and Explaining Social Media Image Forgery Detection

📅 2025-05-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
The increasing realism of AI-generated images on social media, coupled with poor generalization of existing detection methods and insufficient scale and authenticity of benchmark datasets, hinders robust forgery detection in real-world social scenarios. Method: We introduce So-Fake-Set—a million-scale, high-fidelity benchmark (2M samples) specifically designed for social-media forgery detection—and So-Fake-OOD, a rigorously constructed out-of-distribution evaluation set (100K cross-domain samples). We further propose So-Fake-R1, the first reinforcement learning–based vision-language framework that jointly performs detection, localization, and interpretable visual rationale generation. It integrates multimodal modeling, cross-domain generalization training, and explainable AI techniques. Contribution/Results: Extensive experiments demonstrate that So-Fake-R1 achieves state-of-the-art performance, improving detection accuracy by 1.3% and localization IoU by 4.5% over the second-best prior method. All code, models, and datasets will be fully open-sourced to foster reproducible research and community advancement.

📝 Abstract
Recent advances in AI-powered generative models have enabled the creation of increasingly realistic synthetic images, posing significant risks to information integrity and public trust on social media platforms. While robust detection frameworks and diverse, large-scale datasets are essential to mitigate these risks, existing academic efforts remain limited in scope: current datasets lack the diversity, scale, and realism required for social media contexts, while detection methods struggle with generalization to unseen generative technologies. To bridge this gap, we introduce So-Fake-Set, a comprehensive social media-oriented dataset with over 2 million high-quality images, diverse generative sources, and photorealistic imagery synthesized using 35 state-of-the-art generative models. To rigorously evaluate cross-domain robustness, we establish a novel and large-scale (100K) out-of-domain benchmark (So-Fake-OOD) featuring synthetic imagery from commercial models explicitly excluded from the training distribution, creating a realistic testbed for evaluating real-world performance. Leveraging these resources, we present So-Fake-R1, an advanced vision-language framework that employs reinforcement learning for highly accurate forgery detection, precise localization, and explainable inference through interpretable visual rationales. Extensive experiments show that So-Fake-R1 outperforms the second-best method, with a 1.3% gain in detection accuracy and a 4.5% increase in localization IoU. By integrating a scalable dataset, a challenging OOD benchmark, and an advanced detection framework, this work establishes a new foundation for social media-centric forgery detection research. The code, models, and datasets will be released publicly.
Problem

Research questions and friction points this paper is trying to address.

Detect synthetic images on social media platforms
Improve generalization of forgery detection methods
Provide explainable visual rationales for detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

So-Fake-Set dataset with 2M diverse images
So-Fake-OOD benchmark for cross-domain testing
So-Fake-R1 vision-language framework with reinforcement learning
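The localization gains above are reported as IoU (intersection over union) between the predicted and ground-truth manipulated regions. As a minimal sketch of how that metric is computed (this is a generic illustration with made-up toy masks, not the paper's evaluation code):

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """IoU between two binary masks: |pred ∩ gt| / |pred ∪ gt|."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, gt).sum() / union

# Toy example: two 4x4 masks of 4 pixels each, overlapping in 1 pixel
pred = np.zeros((4, 4), dtype=bool)
pred[0:2, 0:2] = True
gt = np.zeros((4, 4), dtype=bool)
gt[1:3, 1:3] = True
print(mask_iou(pred, gt))  # 1 overlapping pixel / 7 union pixels ≈ 0.143
```

A 4.5% absolute IoU gain, as reported here, means the predicted forged regions overlap the ground truth noticeably more tightly than the runner-up method's predictions.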