Let Real Images be as a Judger, Spotting Fake Images Synthesized with Generative Models

📅 2024-03-25
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Existing forensic methods that rely on hand-crafted artifact features generalize poorly because artifacts are inconsistent across generative models. Method: This paper proposes a universal image-authenticity detection framework supervised by natural statistical traces, i.e., statistical regularities inherently shared by authentic images, thereby avoiding dependence on model-specific artifacts. The method integrates natural-trace modeling into supervised contrastive learning, yielding a discriminative mechanism driven by the distance between an image and the natural statistical trace. The approach incorporates extended supervised contrastive learning, joint evaluation across twelve generative sources (six GANs and six diffusion models), and robustness testing against common geometric and photometric transformations. Contribution/Results: Evaluated on a high-quality, multi-model dataset curated by the authors, the method achieves 96.1% mAP; it further attains over 78.4% accuracy on real-world Midjourney images. Code and partial data are publicly released.

📝 Abstract
In the last few years, generative models have shown powerful capabilities in synthesizing images that are realistic in both quality and diversity (e.g., facial images and natural subjects). Unfortunately, the artifact patterns in fake images synthesized by different generative models are inconsistent, causing the failure of previous research that relied on spotting subtle differences between real and fake. In our preliminary experiments, we find that the artifacts in fake images keep changing as generative models develop, while natural images exhibit stable statistical properties. In this paper, we employ natural traces shared only by real images as an additional predictive target in the detector. Specifically, the natural traces are learned from wild real images, and we introduce extended supervised contrastive learning to bring them closer to real images and further away from fake ones. This motivates the detector to make decisions based on the proximity of images to the natural traces. To conduct a comprehensive experiment, we built a high-quality and diverse dataset covering 6 GAN and 6 diffusion models, to evaluate generalization to unknown forgery techniques and robustness to different transformations. Experimental results show that our proposed method achieves 96.1% mAP, significantly outperforming the baselines. Extensive experiments conducted on the widely recognized platform Midjourney show that our method achieves an accuracy exceeding 78.4%, underscoring its practicality for real-world deployment. The source code and a partial self-built dataset are available in the supplementary material.
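The core idea in the abstract, pulling real-image embeddings toward a learned natural trace and pushing fake ones away, can be illustrated with a standard supervised contrastive loss in which the trace is treated as an extra member of the "real" class. This is only a minimal sketch under that assumption, not the authors' extended variant: the `supcon_loss` function, the 2-D embeddings, and the `trace` vector below are all illustrative.

```python
import numpy as np

def supcon_loss(embeddings, labels, temperature=0.1):
    """Plain supervised contrastive loss over L2-normalized embeddings.

    Samples sharing a label are mutual positives; every other sample in
    the batch acts as a negative in the softmax denominator.  The paper's
    natural trace can be included simply as an extra sample labeled 'real'.
    """
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature                    # cosine similarities
    n = len(labels)
    losses = []
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue
        # log of the denominator: all pairs except the self-similarity term
        log_denom = np.log(np.exp(np.delete(sim[i], i)).sum())
        losses.append(np.mean([log_denom - sim[i, j] for j in positives]))
    return float(np.mean(losses))

# Illustrative batch: index 0 is the (hypothetical) natural trace,
# labeled as real; indices 1-2 are real images, 3-4 are fakes.
trace = np.array([[1.0, 0.0]])
real  = np.array([[0.9, 0.1], [1.0, 0.05]])
fake_far  = np.array([[-1.0, 0.1], [-0.9, -0.2]])   # fakes far from the trace
fake_near = np.array([[0.95, 0.0], [0.85, 0.1]])    # fakes near the trace
labels = [0, 0, 0, 1, 1]

loss_separated = supcon_loss(np.vstack([trace, real, fake_far]), labels)
loss_mixed     = supcon_loss(np.vstack([trace, real, fake_near]), labels)
# The loss is lower when fakes lie far from the trace, which is exactly
# the geometry the detector exploits: classify by proximity to the trace.
```

The comparison at the bottom shows why such a loss supports a distance-based decision rule: a well-trained embedding (fakes far from the trace) attains a lower loss than one where fakes overlap the real cluster.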
Problem

Research questions and friction points this paper is trying to address.

Detect fake images using natural traces in real images
Improve robustness against evolving generative model artifacts
Enhance detection accuracy across diverse forgery techniques
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised feature mapping for natural trace extraction
Transfer learning with soft contrastive loss
High-quality dataset for evaluating unknown forgery techniques