🤖 AI Summary
To address the limited generalization capability of diffusion-based image forgery detection methods, this paper presents the first systematic survey of generalized diffusion image forensics. Focusing on cross-model and cross-dataset generalizability, we propose a unified taxonomy that distinguishes two principal paradigms (data-driven and feature-driven) and six fine-grained technical pathways. We identify and analyze key research directions, including statistical modeling, frequency-domain analysis, latent-space anomaly detection, self-supervised representation learning, multi-model transfer, and meta-learning. We further synthesize core open challenges and articulate future research paradigms for robust, generalizable detection. In addition, we curate an authoritative, open-source repository (hosted on GitHub) covering more than 100 works, providing comprehensive support for algorithm design, benchmark development, and robustness enhancement in diffusion image forensics.
📝 Abstract
The rise of diffusion models has significantly improved the fidelity and diversity of generated images. Alongside their numerous benefits, these advances also introduce new risks: diffusion models can be exploited to create high-quality Deepfake images, posing challenges for image authenticity verification. Research on generalizable diffusion-generated image detection has grown rapidly in recent years, yet a comprehensive review of this topic is still lacking. To bridge this gap, we present a systematic survey of recent advances and classify them into two main categories: (1) data-driven detection and (2) feature-driven detection. Existing detection methods are further divided into six fine-grained categories based on their underlying principles. Finally, we identify several open challenges and envision future directions, with the hope of inspiring further research on this important topic. The works reviewed in this survey can be found at https://github.com/zju-pi/Awesome-Diffusion-generated-Image-Detection.