🤖 AI Summary
Existing methods for detecting AI-generated images often lack interpretability and rely on implicit assumptions about synthetic artifacts, which limits their robustness under distribution shift. This work proposes a training-free detection framework that relies only on the statistical properties of authentic images. It integrates multiple untrained statistical descriptors, computes a p-value for each, and aggregates them with classical statistical ensembling techniques such as Fisher's method, yielding an interpretable probabilistic score for how consistent a given image is with the distribution of real data. To our knowledge, this is the first approach to build a universal detection mechanism grounded entirely in the statistics of genuine images, and it demonstrates strong robustness and flexibility across diverse generative models and cross-domain scenarios.
📝 Abstract
As generative models continue to evolve, detecting AI-generated images remains a critical challenge. While effective detection methods exist, they often lack formal interpretability and may rely on implicit assumptions about fake content, potentially limiting robustness to distributional shifts. In this work, we introduce a rigorous, statistically grounded framework for fake image detection that focuses on producing a probability score interpretable with respect to the real-image population. Our method leverages the strengths of multiple existing detectors by combining training-free statistics. We compute p-values over a range of test statistics and aggregate them using classical statistical ensembling to assess alignment with the unified real-image distribution. This framework is generic, flexible, and training-free, making it well-suited for robust fake image detection across diverse and evolving settings.
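The abstract describes computing p-values over several test statistics and aggregating them with classical statistical ensembling, with Fisher's method named in the summary. As a minimal stdlib-only sketch of that aggregation step (the per-statistic p-values below are hypothetical inputs, not values from the paper):

```python
import math

def fisher_combine(pvalues):
    """Combine independent p-values via Fisher's method.

    The statistic X = -2 * sum(ln p_i) follows a chi-squared
    distribution with 2k degrees of freedom under the global null
    (the image is consistent with the real-image distribution).
    """
    k = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    # For even df = 2k, the chi-squared survival function has a
    # closed form: P(X > x) = exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!
    half = x / 2.0
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= half / i
        total += term
    return math.exp(-half) * total

# Hypothetical p-values from three untrained statistical descriptors
# for one test image; a small combined score would flag the image as
# inconsistent with the real-image population.
combined = fisher_combine([0.04, 0.10, 0.30])
```

With a single p-value the combined score reduces to that p-value itself, which is a quick sanity check on the survival-function formula.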