🤖 AI Summary
This work addresses the sharp performance degradation that reconstruction-error-based detectors of AI-generated images suffer on high-fidelity synthetic content. To overcome this limitation, the paper introduces a novel “difference-in-difference” paradigm that replaces the conventional first-order reconstruction error with a second-order difference of such errors, which reduces noise variance and amplifies the discriminative signal. Using diffusion models for image reconstruction together with this differential analysis, the proposed method achieves state-of-the-art performance across multiple generative models (including Stable Diffusion and DALL·E) and benchmark datasets, substantially improving the generalization and robustness of detectors on high-fidelity generated images.
📝 Abstract
Diffusion models can produce AI-generated images that are almost indistinguishable from real ones. This raises concerns about their potential misuse and poses substantial challenges for detection. Many existing detectors rely on reconstruction error -- the difference between the input image and its reconstructed version -- as the basis for distinguishing real from fake images. However, these detectors become less effective as modern AI-generated images grow increasingly similar to real ones. To address this challenge, we propose a novel difference-in-difference method. Instead of directly using the reconstruction error (a first-order difference), we compute the difference in reconstruction errors -- a second-order difference -- to reduce variance and improve detection accuracy. Extensive experiments demonstrate that our method achieves strong generalization performance, enabling reliable detection of AI-generated images in the era of generative AI.
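The abstract does not spell out how the second-order difference is formed, so the sketch below shows one plausible instantiation: compare the reconstruction error of the input image with the reconstruction error of its own reconstruction, so that noise common to both passes cancels. The `toy_reconstruct` function is a purely illustrative stand-in for the paper's diffusion-based reconstruction, and the exact differencing scheme is an assumption, not the authors' specification.

```python
import numpy as np

def reconstruction_error(image, reconstruct):
    """First-order difference: per-pixel error between an image and its
    reconstruction (the paper uses a diffusion model for this step)."""
    return np.abs(image - reconstruct(image))

def difference_in_difference(image, reconstruct):
    """Second-order difference (one assumed form): the error of the image
    minus the error of its own reconstruction. Noise shared by both
    passes cancels, which is the variance-reduction idea in the paper."""
    err_image = reconstruction_error(image, reconstruct)
    err_recon = reconstruction_error(reconstruct(image), reconstruct)
    return err_image - err_recon

def toy_reconstruct(image):
    """Illustrative stand-in for diffusion reconstruction: a 5-point
    smoothing filter. NOT the paper's pipeline."""
    padded = np.pad(image, 1, mode="edge")
    return (padded[:-2, 1:-1] + padded[2:, 1:-1] +
            padded[1:-1, :-2] + padded[1:-1, 2:] +
            padded[1:-1, 1:-1]) / 5.0

rng = np.random.default_rng(0)
img = rng.random((8, 8))
# A scalar detection statistic could then be, e.g., the mean second-order
# difference; a threshold on it separates real from generated images.
score = float(np.mean(difference_in_difference(img, toy_reconstruct)))
```

Under this scheme, a real image and a generated one are expected to yield differently distributed second-order statistics, because a generative model reconstructs its own outputs more faithfully than natural images.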