🤖 AI Summary
Addressing the critical security challenge of AI-generated image detection, this paper proposes a zero-shot detection method grounded in predictive uncertainty. The core insight is that AI-generated images induce significantly higher predictive uncertainty—quantified via Monte Carlo Dropout, entropy, or confidence scores—on pretrained vision models (e.g., ViT, CLIP), whereas natural images yield more stable predictions. Crucially, this work is the first to directly leverage uncertainty as the discriminative signal, eliminating the need for training task-specific discriminators, fine-tuning backbone models, or relying on synthetic training data. Evaluated across multiple benchmarks—including ForenSynths and WildVision—the method achieves an average AUC of 96.2%, demonstrating strong generalization, plug-and-play usability, and state-of-the-art performance with minimal deployment overhead.
📝 Abstract
In this work, we propose a novel approach for detecting AI-generated images by leveraging predictive uncertainty, with the aim of mitigating misuse and its associated risks. The feasibility of distinguishing natural images from AI-generated ones is grounded in the distribution discrepancy between them. Predictive uncertainty offers an effective means of capturing such distribution shifts: as the shift between training and testing data increases, model performance typically degrades, often accompanied by increased predictive uncertainty. We therefore propose to employ predictive uncertainty to reflect the discrepancy between AI-generated and natural images. The challenge in this setting lies in ensuring that the model has been trained on sufficiently many natural images, so that the distribution of natural images is not itself mistaken for that of generated ones. To this end, we leverage large-scale pre-trained vision models and use their predictive uncertainty as the detection score. This yields a simple yet effective method for detecting AI-generated images with large-scale vision models: images that induce high uncertainty are identified as AI-generated. Comprehensive experiments across multiple benchmarks demonstrate the effectiveness of our method.
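The scoring rule described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: the function names and the threshold value are assumptions, and in practice the class probabilities would come from a large pre-trained vision model (e.g., ViT or CLIP), optionally averaged over several Monte Carlo Dropout forward passes before computing the entropy.

```python
import numpy as np

def predictive_entropy(probs):
    """Shannon entropy of a softmax distribution; higher means more uncertain."""
    probs = np.clip(probs, 1e-12, 1.0)  # guard against log(0)
    return -np.sum(probs * np.log(probs), axis=-1)

def detect_ai_generated(probs, threshold=0.5):
    """Flag an image as AI-generated when its predictive entropy exceeds a
    threshold. `threshold` is illustrative; in practice it would be chosen
    on a validation set or via the score distribution of natural images."""
    return predictive_entropy(probs) > threshold

# Toy softmax outputs: a confident ("natural-like") prediction vs. a
# diffuse ("generated-like") one from a hypothetical pretrained classifier.
natural_like = np.array([0.97, 0.01, 0.01, 0.01])
generated_like = np.array([0.25, 0.25, 0.25, 0.25])

print(detect_ai_generated(natural_like))    # low entropy -> natural
print(detect_ai_generated(generated_like))  # high entropy -> AI-generated
```

Because the score is read directly off a frozen pre-trained model, the detector needs no task-specific training, which is what makes the method zero-shot and plug-and-play.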