🤖 AI Summary
Existing methods for detecting images generated by diffusion models fail to distinguish between aleatoric and epistemic uncertainty, limiting their discriminative performance and generalization capability. This work addresses this limitation by explicitly leveraging epistemic uncertainty for detection—a first in the field—and proposes a Laplace approximation–based approach to estimate epistemic uncertainty in diffusion models. Furthermore, an asymmetric loss function with a large margin is introduced to emphasize the most discriminative components of reconstruction error. The proposed method achieves state-of-the-art performance across multiple large-scale benchmarks and demonstrates significantly improved generalization in detecting images synthesized by previously unseen diffusion models.
📝 Abstract
The rapid progress of diffusion models highlights the growing need for detecting generated images. Previous research demonstrates that incorporating diffusion-based measurements, such as reconstruction error, can enhance the generalizability of detectors. However, ignoring the differing impacts of aleatoric and epistemic uncertainty on reconstruction error can undermine detection performance. Aleatoric uncertainty, arising from inherent data noise, creates ambiguity that impedes accurate detection of generated images. As it reflects random variations within the data (e.g., noise in natural textures), it does not help distinguish generated images. In contrast, epistemic uncertainty, which represents the model's lack of knowledge about unfamiliar patterns, supports detection. In this paper, we propose a novel framework, Diffusion Epistemic Uncertainty with Asymmetric Learning (DEUA), for detecting diffusion-generated images. We introduce Diffusion Epistemic Uncertainty (DEU) estimation via the Laplace approximation to assess the proximity of data to the manifold of diffusion-generated samples. Additionally, an asymmetric loss function is introduced to train a balanced classifier with larger margins, further enhancing generalizability. Extensive experiments on large-scale benchmarks validate the state-of-the-art performance of our method.
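To make the core idea of Laplace-approximation-based epistemic uncertainty concrete, here is a minimal sketch on a linear-Gaussian model, where the Laplace posterior over the weights is exact. The paper's actual estimator operates on a diffusion model's parameters; this toy (all function names and hyperparameters are illustrative, not from the paper) only shows the mechanism: the posterior covariance is the inverse Hessian of the negative log-posterior at the MAP estimate, and the quadratic form `x*ᵀ Σ x*` gives the epistemic (parameter-uncertainty) part of the predictive variance.

```python
import numpy as np

def laplace_fit(X, y, prior_precision=1.0, noise_var=0.25):
    """Laplace approximation for a linear-Gaussian model (illustrative).

    Returns the MAP weights and a function mapping an input x* to its
    epistemic predictive variance x*^T Sigma x*, where Sigma is the
    inverse Hessian of the negative log-posterior at the MAP point.
    """
    d = X.shape[1]
    # Hessian of the negative log-posterior = data precision + prior precision.
    H = X.T @ X / noise_var + prior_precision * np.eye(d)
    Sigma = np.linalg.inv(H)                  # Laplace posterior covariance
    w_map = Sigma @ (X.T @ y / noise_var)     # MAP weight estimate

    def epistemic_var(x_star):
        # Variance from parameter uncertainty only; the aleatoric noise
        # term (noise_var) is deliberately excluded.
        return float(x_star @ Sigma @ x_star)

    return w_map, epistemic_var

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.5 * rng.normal(size=200)
w_map, epistemic_var = laplace_fit(X, y)
# Inputs far from the training distribution receive higher epistemic variance,
# which is the signal the abstract uses to flag unfamiliar (generated) patterns.
print(epistemic_var(np.ones(3)), epistemic_var(10 * np.ones(3)))
```

The key property, visible in the printout, is that epistemic variance grows for inputs away from the training manifold while aleatoric noise stays fixed, which is why only the former is useful for detection.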
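The abstract's second component, an asymmetric loss that trains the classifier with larger margins, is not specified in detail here. As a hedged illustration of the general idea, the sketch below uses a hinge-style binary loss with class-dependent margins (the margin values and the choice of which class gets the larger margin are assumptions for demonstration, not the paper's formulation): one class must be scored beyond a larger margin than the other, pushing the decision boundary asymmetrically.

```python
import numpy as np

def asymmetric_margin_loss(logits, labels, margin_pos=2.0, margin_neg=0.5):
    """Hinge loss with class-dependent margins (illustrative only).

    Samples with label 1 are penalized unless their logit exceeds
    +margin_pos; samples with label 0 are penalized unless their logit
    falls below -margin_neg. A larger margin_pos enforces a wider
    separation on the positive class.
    """
    # Signed score: positive when the sample is on its correct side.
    z = np.where(labels == 1, logits, -logits)
    m = np.where(labels == 1, margin_pos, margin_neg)
    return np.maximum(0.0, m - z).mean()

logits = np.array([3.0, -1.0, 0.3, 1.1])
labels = np.array([1, 0, 1, 0])
loss = asymmetric_margin_loss(logits, labels)
```

With symmetric margins this reduces to the standard hinge loss; making the margins unequal is one simple way to obtain the kind of asymmetric, large-margin objective the abstract describes.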