Noise-Informed Diffusion-Generated Image Detection With Anomaly Attention

📅 2025-06-20
🏛️ IEEE Transactions on Information Forensics and Security
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Detecting diffusion-generated images remains challenging, especially for unseen generative models not encountered during training. Method: This paper proposes a robust detection framework grounded in universal noise characteristics inherent to diffusion processes. Its core innovations are the Noise-Aware Self-Attention (NASA) module and the NASA-Swin detection architecture: NASA introduces noise-guided anomalous attention to enable cross-modal fusion of RGB and noise-domain features, while a channel-wise masking strategy enhances discriminability. Crucially, the method requires no model-specific priors: only the residual noise naturally preserved in diffusion outputs serves as the universal detection cue. Contribution/Results: The approach achieves state-of-the-art performance on cross-model generalization benchmarks, significantly improving detection accuracy and robustness against images from unknown diffusion models. Experimental validation confirms that residual noise constitutes a viable and effective universal signal for generalized synthetic image detection.

πŸ“ Abstract
With the rapid development of image generation technologies, especially the advancement of Diffusion Models, the quality of synthesized images has significantly improved, raising concerns among researchers about information security. To mitigate the malicious abuse of diffusion models, diffusion-generated image detection has proven to be an effective countermeasure. However, a key challenge for forgery detection is generalising to diffusion models not seen during training. In this paper, we address this problem by focusing on image noise. We observe that images from different diffusion models share similar noise patterns, distinct from genuine images. Building upon this insight, we introduce a novel Noise-Aware Self-Attention (NASA) module that focuses on noise regions to capture anomalous patterns. To implement a SOTA detection model, we incorporate NASA into Swin Transformer, forming a novel detection architecture, NASA-Swin. Additionally, we employ a cross-modality fusion embedding to combine RGB and noise images, along with a channel mask strategy to enhance feature learning from both modalities. Extensive experiments demonstrate the effectiveness of our approach in enhancing detection capabilities for diffusion-generated images. When encountering unseen generation methods, our approach achieves state-of-the-art performance.
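As a rough illustration of the pipeline the abstract describes (a noise residual as a second input modality, fused with RGB under a channel mask), here is a minimal pure-Python sketch. The box-blur denoiser, the 50/50 masking rule, and all function names are assumptions for illustration; the paper's exact operators are not given in this summary.

```python
import random


def box_blur(img, k=3):
    # Mean filter used as a stand-in denoiser; the paper's actual
    # denoising operator is not specified in this summary.
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, n = 0.0, 0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += img[yy][xx]
                        n += 1
            out[y][x] = acc / n
    return out


def noise_residual(img):
    # Noise-domain input: the image minus its denoised version.
    blurred = box_blur(img)
    return [[p - q for p, q in zip(row, brow)]
            for row, brow in zip(img, blurred)]


def channel_mask(rgb_feats, noise_feats, rng=random):
    # Hypothetical channel-mask strategy: with equal probability, zero
    # out one modality's channels so the detector cannot ignore either
    # the RGB branch or the noise branch during training.
    if rng.random() < 0.5:
        noise_feats = [[0.0] * len(c) for c in noise_feats]
    else:
        rgb_feats = [[0.0] * len(c) for c in rgb_feats]
    return rgb_feats + noise_feats  # fused channel stack
```

Note that for a perfectly flat image the residual is zero everywhere, so only genuinely structured noise survives as a detection cue.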
Problem

Research questions and friction points this paper is trying to address.

Detecting images generated by unseen diffusion models
Identifying noise patterns in diffusion-generated images
Improving detection with noise-aware attention and cross-modality fusion
Innovation

Methods, ideas, or system contributions that make the work stand out.

Noise-Aware Self-Attention module for anomaly detection
NASA-Swin architecture combining NASA with Swin Transformer
Cross-modality fusion of RGB and noise images
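The noise-guided attention idea can be sketched as an attention reweighting in which key positions with high noise energy receive an additive bias before the softmax. The additive form, the `alpha` weight, and the function names below are assumptions for illustration, not the paper's exact formulation.

```python
import math


def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]


def noise_aware_attention(scores, noise_energy, alpha=1.0):
    # Hypothetical NASA-style reweighting: bias each key's attention
    # score by the noise energy at that position, steering attention
    # toward noisy regions where generation artifacts may concentrate.
    biased = [s + alpha * n for s, n in zip(scores, noise_energy)]
    return softmax(biased)
```

With `alpha = 0` this reduces to ordinary softmax attention, so the strength of the noise guidance could be annealed or learned.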
Weinan Guan
School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China and New Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences (CASIA), Beijing 100190, China
Wei Wang
New Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences (CASIA), Beijing 100190, China
Bo Peng
New Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences (CASIA), Beijing 100190, China
Ziwen He
Nanjing University of Information Science and Technology
Jing Dong
New Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences (CASIA), Beijing 100190, China
Haonan Cheng
State Key Laboratory of Media Convergence and Communication, Communication University of China, Beijing 100024, China