Task-aligned prompting improves zero-shot detection of AI-generated images by Vision-Language Models

📅 2025-05-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the poor generalizability and heavy reliance on labeled data of supervised AI-generated-image detectors, this paper proposes zero-shot task-aligned prompting (zero-shot-s²), which activates the implicit detection capability of pre-trained vision-language models (VLMs)—e.g., CLIP and FLAVA—without fine-tuning. The method pairs a style- and synthesis-artifact-guided instruction design with a self-consistency ensembling mechanism, enabling robust detection across generators and content domains (faces, objects, animals). Key contributions include: (1) the first task-aligned prompting paradigm for forgery detection; (2) the first empirical validation of self-consistency in visual deepfake detection; and (3) evidence that prompt engineering alone can elicit training-free, interpretable detection capabilities from VLMs. Experiments demonstrate consistent improvements—gains of 8%–29% in Macro F1—across 16 generators and three recent benchmarks, significantly outperforming chain-of-thought baselines, with the gains holding across multiple VLM scales.

📝 Abstract
As image generators produce increasingly realistic images, concerns about potential misuse continue to grow. Supervised detection relies on large, curated datasets and struggles to generalize across diverse generators. In this work, we investigate the use of pre-trained Vision-Language Models (VLMs) for zero-shot detection of AI-generated images. While off-the-shelf VLMs exhibit some task-specific reasoning and chain-of-thought prompting offers gains, we show that task-aligned prompting elicits more focused reasoning and significantly improves performance without fine-tuning. Specifically, prefixing the model's response with the phrase "Let's examine the style and the synthesis artifacts"—a method we call zero-shot-s²—boosts Macro F1 scores by 8%–29% for two widely used open-source models. These gains are consistent across three recent, diverse datasets spanning human faces, objects, and animals with images generated by 16 different models—demonstrating strong generalization. We further evaluate the approach across three additional model sizes and observe improvements in most dataset-model combinations—suggesting robustness to model scale. Surprisingly, self-consistency, a behavior previously observed in language reasoning, where aggregating answers from diverse reasoning paths improves performance, also holds in this setting. Even here, zero-shot-s² scales better than chain-of-thought in most cases—indicating that it elicits more useful diversity. Our findings show that task-aligned prompts elicit more focused reasoning and enhance latent capabilities in VLMs, like the detection of AI-generated images—offering a simple, generalizable, and explainable alternative to supervised methods. Our code is publicly available on GitHub: https://github.com/osome-iu/Zero-shot-s2.git.
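The two moving parts described in the abstract—seeding the model's response with the task-aligned prefix, and aggregating several sampled answers by majority vote (self-consistency)—can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the chat-message schema and label strings here are assumptions, and the actual VLM generation call is left out.

```python
from collections import Counter

# The response prefix used by zero-shot-s^2: the assistant turn is
# pre-filled with this phrase so the model reasons about style and
# synthesis artifacts before committing to a label.
TASK_ALIGNED_PREFIX = "Let's examine the style and the synthesis artifacts"

def build_messages(question: str) -> list:
    """Build a chat-style prompt whose assistant turn is pre-filled with
    the task-aligned prefix (the message schema is an assumption; adapt
    it to your VLM's API)."""
    return [
        {"role": "user", "content": question},
        {"role": "assistant", "content": TASK_ALIGNED_PREFIX},
    ]

def self_consistency_vote(answers: list) -> str:
    """Aggregate answers sampled from diverse reasoning paths
    (e.g. 'real' / 'ai-generated') by simple majority vote."""
    counts = Counter(a.strip().lower() for a in answers)
    return counts.most_common(1)[0][0]
```

In use, one would sample the continuation of `build_messages(...)` several times with a nonzero temperature, extract a label from each completion, and return `self_consistency_vote` over those labels.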
Problem

Research questions and friction points this paper is trying to address.

Detect AI-generated images without labeled training data
Improve zero-shot detection using task-aligned prompting
Generalize across diverse image generators and datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Task-aligned prompting enhances zero-shot detection
Zero-shot-s2 boosts performance without fine-tuning
Method generalizes across diverse datasets and models
Zoher Kachwala
Observatory on Social Media, Indiana University, Bloomington, USA
Danishjeet Singh
Observatory on Social Media, Indiana University, Bloomington, USA
Danielle Yang
Observatory on Social Media, Indiana University, Bloomington, USA
Filippo Menczer
Luddy Distinguished Professor of Informatics and Computer Science, Indiana University
Misinformation · Web Science · Network Science · Computational Social Science · Social Media