🤖 AI Summary
AI-generated face detection suffers from poor generalization, particularly to unseen generative models. Method: This paper proposes a zero-shot, self-supervised anomaly detection framework that uses only authentic face images (no synthetic samples) to jointly model camera-intrinsic cues, learned by ranking ordinal EXIF tags, and face-specific features through multi-task pretext learning. A Gaussian mixture model (GMM) is then fit in the learned feature space to perform distribution-level anomaly detection. Contribution/Results: Unlike supervised approaches, the method achieves markedly better zero-shot detection across diverse, unseen generators, reliably identifying unknown AI-generated faces without any forged training data. Extensive qualitative and quantitative experiments demonstrate its robustness, interpretability, and strong generalization.
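The distribution-level anomaly detection step above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature extractor is not shown, the Gaussian blobs stand in for features of photographic faces, and the component count and percentile threshold are hypothetical choices.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Placeholder for features of photographic faces produced by the
# pretrained extractor (the extractor itself is not shown here).
real_feats = rng.normal(loc=0.0, scale=1.0, size=(500, 8))

# Fit a GMM to the feature distribution of real faces only.
gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
gmm.fit(real_feats)

# Hypothetical threshold: the 1st percentile of log-likelihoods on real data.
tau = np.percentile(gmm.score_samples(real_feats), 1.0)

def is_ai_generated(feat: np.ndarray) -> bool:
    """Flag a face whose feature falls in a low-likelihood region."""
    return bool(gmm.score_samples(feat.reshape(1, -1))[0] < tau)

print(is_ai_generated(np.zeros(8)))      # near the fitted distribution's center
print(is_ai_generated(np.full(8, 10.0))) # far outside the fitted distribution
```

In practice the GMM would be fit on learned features rather than raw vectors, but the scoring logic, flagging inputs whose likelihood under the real-face model is low, is the same.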
📝 Abstract
The detection of AI-generated faces is commonly approached as a binary classification task. Nevertheless, the resulting detectors frequently struggle to adapt to novel AI face generators, which evolve rapidly. In this paper, we describe an anomaly detection method for AI-generated faces that leverages self-supervised learning of camera-intrinsic and face-specific features purely from photographic face images. The success of our method lies in a pretext task that trains a feature extractor to rank four ordinal exchangeable image file format (EXIF) tags and to classify artificially manipulated face images. Subsequently, we model the learned feature distribution of photographic face images using a Gaussian mixture model; faces with low likelihoods are flagged as AI-generated. Both quantitative and qualitative experiments validate the effectiveness of our method. Our code is available at https://github.com/MZMMSEC/AIGFD_EXIF.git.
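The EXIF ranking pretext task could plausibly be trained with a pairwise margin ranking loss, illustrated below for a single ordinal tag. This is a hedged sketch, not the paper's loss: the scores, the tag choice (ISO), and the margin are all hypothetical, and the full objective would sum such terms over four tags alongside the manipulation-classification loss.

```python
import numpy as np

def pairwise_ranking_loss(scores_a: np.ndarray,
                          scores_b: np.ndarray,
                          order: np.ndarray,
                          margin: float = 1.0) -> float:
    """Hinge-style margin ranking loss over a batch of image pairs.

    order = +1 means image A's tag value should rank above B's,
    order = -1 the reverse (sign convention as in
    torch.nn.MarginRankingLoss).
    """
    return float(np.maximum(0.0, -order * (scores_a - scores_b) + margin).mean())

# Toy batch of two pairs, with hypothetical predicted scores for one
# ordinal EXIF tag (say, ISO). Ground truth: A's ISO exceeds B's in both.
s_a = np.array([2.0, 0.5])
s_b = np.array([0.0, 1.5])
order = np.array([1.0, 1.0])
print(pairwise_ranking_loss(s_a, s_b, order))  # → 1.0
```

The first pair is ranked correctly with a comfortable gap, so it contributes zero loss; the second is mis-ranked and is penalized, which is what pushes the extractor to encode the ordinal structure of the tag.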