Detecting Generated Images by Fitting Natural Image Distributions

📅 2025-11-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing binary classification methods for detecting the misuse of generated images rely heavily on the quantity and quality of synthetic training samples, limiting their practicality and generalizability. Method: We propose an unsupervised image authenticity verification framework that exploits geometric discrepancies between natural and generated images on the data manifold. Specifically, we design a dual-branch self-supervised objective that enforces output consistency for natural images while inducing divergence for generated ones; incorporate normalizing flows to amplify the manifold shifts induced by state-of-the-art generators; and enhance discriminability via orthogonal gradient subspaces on the manifold. Contribution/Results: The method requires no paired real-generated labels and trains solely on unlabeled natural images. Extensive evaluation across diverse SOTA generative models (including Stable Diffusion and various GANs) demonstrates high detection accuracy, strong generalization to unseen generators, and robustness to post-processing attacks.

📝 Abstract
The increasing realism of generated images has raised significant concerns about their potential misuse, necessitating robust detection methods. Current approaches mainly rely on training binary classifiers, which depend heavily on the quantity and quality of available generated images. In this work, we propose a novel framework that exploits geometric differences between the data manifolds of natural and generated images. To exploit this difference, we employ a pair of functions engineered to yield consistent outputs for natural images but divergent outputs for generated ones, leveraging the property that their gradients reside in mutually orthogonal subspaces. This design enables a simple yet effective detection method: an image is identified as generated if a transformation along its data manifold induces a significant change in the loss value of a self-supervised model pre-trained on natural images. Furthermore, to address diminishing manifold disparities in advanced generative models, we leverage normalizing flows to amplify detectable differences by extruding generated images away from the natural image manifold. Extensive experiments demonstrate the efficacy of this method. Code is available at https://github.com/tmlr-group/ConV.
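The detection rule in the abstract (flag an image if a transformation along its data manifold shifts a self-supervised loss significantly) can be sketched as a toy example. The `ssl_loss` and `manifold_transform` stand-ins below are hypothetical placeholders, not the paper's actual self-supervised model or manifold transformation; a real implementation would plug in a model pre-trained on natural images.

```python
import numpy as np

def detection_score(image, ssl_loss, manifold_transform):
    """Change in self-supervised loss after moving the image along
    its (estimated) data manifold. For natural images the loss should
    stay stable; for generated images it should shift noticeably."""
    return abs(ssl_loss(manifold_transform(image)) - ssl_loss(image))

def is_generated(image, ssl_loss, manifold_transform, threshold=0.5):
    """Threshold the score to get a binary real/generated decision."""
    return detection_score(image, ssl_loss, manifold_transform) > threshold

# Toy stand-ins (hypothetical): a quadratic "loss" and a scaling
# "transformation"; neither reflects the paper's trained components.
toy_loss = lambda x: float(np.sum(x ** 2))
toy_transform = lambda x: 1.1 * x

natural_like = np.zeros(4)    # loss is flat here, so the score is 0
generated_like = np.ones(4)   # loss changes under the transform
```

The choice of threshold would in practice be calibrated on held-out natural images only, keeping the method free of generated training samples.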
Problem

Research questions and friction points this paper is trying to address.

Detect generated images using natural image manifold differences
Amplify detection differences with normalizing flow techniques
Overcome limitations of binary classifier-based detection methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Exploiting geometric differences between natural and generated image manifolds
Using orthogonal gradient subspaces to detect generated images
Applying normalizing flows to amplify detectable manifold differences
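The flow-based amplification idea above, in a heavily simplified toy form. The affine map below is a hypothetical stand-in for a normalizing flow trained on natural images: it sends in-distribution samples toward the base distribution's mode, so out-of-distribution (generated) samples end up noticeably farther away, enlarging the gap the detector measures.

```python
import numpy as np

class AffineFlow:
    """Toy one-layer 'flow' fit to natural images assumed ~ N(mu, sigma).
    A real normalizing flow stacks invertible layers with tractable
    Jacobians; this stand-in only illustrates the amplification effect."""
    def __init__(self, mu, sigma):
        self.mu, self.sigma = mu, sigma

    def forward(self, x):
        # Whitening map: natural samples land near the origin of the
        # base distribution; off-manifold samples land far from it.
        return (x - self.mu) / self.sigma

def flow_space_score(x, flow):
    # Distance from the base distribution's mode; larger for inputs
    # the flow was not trained on.
    return float(np.linalg.norm(flow.forward(x)))

flow = AffineFlow(mu=0.0, sigma=0.5)
natural_like = np.zeros(4)          # maps to the origin, score 0
generated_like = np.full(4, 2.0)    # pushed far from the origin
```

With sigma < 1, the flow-space distance between the two samples exceeds their raw pixel-space distance, which is the "extruding away from the natural manifold" effect in miniature.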