🤖 AI Summary
To address security risks arising from increasingly photorealistic images generated by diffusion models, this paper proposes an interpretable detection method based on the inverse multi-step noise addition process. Methodologically, it is the first to uncover systematic frequency-domain discrepancies between natural and synthetic images during reverse denoising, modeling high-frequency Fourier spectral features and introducing a temporal noise-augmented ensemble detection framework. Furthermore, a Grad-CAM–enhanced interpretability generation and refinement module enables precise localization of forged regions. The work establishes two novel benchmarks: GenHard (for high-difficulty detection) and GenExplain (for interpretability evaluation). Experiments demonstrate state-of-the-art performance—98.91% and 95.89% detection accuracy on standard and challenging samples, respectively—surpassing prior methods by ≥2.51%, with strong cross-model generalization. Code and datasets are publicly released.
📝 Abstract
Recent advances in diffusion models have enabled the creation of deceptively realistic images, posing significant security risks when misused. In this study, we reveal that after undergoing iterative noise perturbations through an inverse multi-step denoising process, natural and synthetic images exhibit distinct differences in the high-frequency bands of their Fourier power spectra, suggesting that such noise can provide additional discriminative information for identifying synthetic images. Based on this observation, we propose a novel detection method that amplifies these differences by progressively adding noise to the original images across multiple timesteps and trains an ensemble of classifiers on the noised images. To aid human comprehension, we introduce an explanation generation and refinement module that locates flaws in AI-generated images. Additionally, we construct two new datasets, GenHard and GenExplain, derived from the GenImage benchmark, which provide harder detection samples and high-quality rationales for fake images. Extensive experiments show that our method achieves state-of-the-art performance, with 98.91% and 95.89% detection accuracy on regular and harder samples, exceeding baselines by at least 2.51% and 3.46%, respectively. Furthermore, our method generalizes effectively to images generated by other diffusion models. Our code and datasets will be made publicly available.
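The core pipeline described above — noising an image at several diffusion timesteps and extracting a high-frequency Fourier feature at each one — can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the noise schedule, the radial frequency cutoff, and the sample image are all hypothetical choices, and in the actual method each timestep's noised image would feed a trained classifier in the ensemble rather than a scalar feature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear beta schedule (the paper's actual schedule is not given here).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)  # cumulative signal-retention factor

def add_noise(x0, t):
    """DDPM-style forward noising: x_t = sqrt(a_bar_t) * x0 + sqrt(1 - a_bar_t) * eps."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

def high_freq_energy(img, cutoff=0.25):
    """Fraction of Fourier power-spectrum energy beyond a radial frequency cutoff."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h))[:, None]  # normalized vertical freqs
    fx = np.fft.fftshift(np.fft.fftfreq(w))[None, :]  # normalized horizontal freqs
    r = np.sqrt(fy ** 2 + fx ** 2)
    return float(spec[r > cutoff].sum() / spec.sum())

# Smooth stand-in image; more noise pushes energy into high frequencies,
# and real vs. synthetic images are hypothesized to diverge in how this happens.
image = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
timesteps = [0, 100, 300, 600]  # illustrative choices
features = [high_freq_energy(add_noise(image, t)) for t in timesteps]
```

In the full framework, one classifier per timestep would be trained on such noised inputs and their decisions combined, which is what the abstract's "ensemble of classifiers on these noised images" refers to.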