🤖 AI Summary
To address poor generalization in synthetic image detection, this paper proposes a generic detection method based on multimodal semantic misalignment. Unlike conventional binary classification approaches that rely solely on visual features, the method leverages the pre-trained CLIP embedding space to quantify semantic inconsistency between images and their corresponding text descriptions at both global and fine-grained levels, constructing a hierarchical misalignment representation. By fine-tuning only an MLP classifier head and adopting a staged alignment analysis strategy, the model avoids overfitting to model-specific generation artifacts. Experiments demonstrate that the approach significantly outperforms current state-of-the-art methods on unseen generative models, including SDXL, DALL·E 3, and Stable Video Diffusion, achieving strong cross-model generalization and robustness across diverse synthetic data distributions.
📝 Abstract
With the rapid development of generative models, detecting generated fake images to prevent their malicious use has recently become a critical issue. Existing methods frame this challenge as a naive binary image classification task. However, such methods focus only on visual clues, leaving trained detectors susceptible to overfitting to specific image patterns and unable to generalize to unseen models. In this paper, we address this issue from a multi-modal perspective and find that fake images are less well aligned with their corresponding captions than real images are. Based on this observation, we propose a simple yet effective detector termed ITEM that leverages image-text misalignment in a joint vision-language space as a discriminative clue. Specifically, we first measure the misalignment of images and captions in pre-trained CLIP's embedding space, and then tune an MLP head to perform the detection task. Furthermore, we propose a hierarchical misalignment scheme that first considers the whole image and then each semantic object described in the caption, exploiting both global and fine-grained local semantic misalignment as clues. Extensive experiments demonstrate the superiority of our method over other state-of-the-art competitors, with impressive generalization and robustness across various recent generative models.
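The pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes CLIP image, caption, and per-object embeddings have already been computed (e.g., with an off-the-shelf CLIP encoder), and the function and class names (`misalignment_features`, `MLPHead`) are hypothetical. The hierarchical scheme is approximated here by a global image-caption cosine distance plus a pooled per-object cosine distance, fed into a small MLP head that outputs a fake probability.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def misalignment_features(img_emb, caption_emb, object_embs):
    """Hierarchical misalignment features (illustrative):
    global image-caption distance + mean per-object distance."""
    global_mis = 1.0 - cosine(img_emb, caption_emb)
    local_mis = [1.0 - cosine(img_emb, o) for o in object_embs]
    # Mean-pool local distances so the feature dimension stays fixed.
    return np.array([global_mis, float(np.mean(local_mis))])

class MLPHead:
    """Tiny two-layer head tuned on misalignment features (sketch;
    weights are random here, whereas the paper would train them)."""
    def __init__(self, in_dim=2, hidden=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(size=(in_dim, hidden)) * 0.5
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(size=hidden) * 0.5
        self.b2 = 0.0

    def forward(self, x):
        h = np.maximum(0.0, x @ self.W1 + self.b1)   # ReLU hidden layer
        logit = h @ self.w2 + self.b2
        return 1.0 / (1.0 + np.exp(-logit))          # sigmoid -> P(fake)

# Stand-in embeddings in place of real CLIP outputs (assumption).
rng = np.random.default_rng(42)
img = rng.normal(size=512)
caption = rng.normal(size=512)
objects = [rng.normal(size=512) for _ in range(3)]

feat = misalignment_features(img, caption, objects)
p_fake = MLPHead().forward(feat)
```

Only the MLP head would be trained; the CLIP encoders stay frozen, which is what keeps the detector from latching onto generator-specific visual artifacts.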