🤖 AI Summary
AI-generated images are increasingly difficult for humans to distinguish from authentic ones, and existing detection methods lack cognitive interpretability. Method: This paper proposes ForenX, a novel explainable image-authenticity detection framework that integrates multimodal large language models (MLLMs) with a forensic prompting mechanism. It designs forensic prompts targeting typical forgery artifacts to direct the MLLMs' attention to salient manipulation cues; constructs the ForgReason dataset, a collection of forgery-evidence descriptions curated through collaboration between an LLM-based agent and human annotators; and validates the plausibility of the explanations through subjective human evaluation. Contribution/Results: Experiments show that the framework achieves high detection accuracy and strong generalization on two major benchmarks. With only minimal human annotation effort, it significantly improves both explanation accuracy and human comprehensibility, bridging the gap between automated detection and human cognitive forensic analysis.
📝 Abstract
Advances in generative models have produced AI-generated images that are visually indistinguishable from authentic ones. Despite numerous studies on detecting AI-generated images with classifiers, a gap persists between such methods and human cognitive forensic analysis. We present ForenX, a novel method that not only identifies the authenticity of images but also provides explanations that resonate with human reasoning. ForenX employs powerful multimodal large language models (MLLMs) to analyze and interpret forensic cues. Furthermore, we overcome the limitations of standard MLLMs in detecting forgeries by incorporating a specialized forensic prompt that directs the MLLMs' attention to forgery-indicative attributes. This approach not only enhances the generalization of forgery detection but also enables the MLLMs to provide explanations that are accurate, relevant, and comprehensive. Additionally, we introduce ForgReason, a dataset dedicated to descriptions of forgery evidence in AI-generated images. Curated through collaboration between an LLM-based agent and a team of human annotators, this dataset further enhances our model's performance. We demonstrate that even limited manual annotation significantly improves explanation quality. We evaluate the effectiveness of ForenX on two major benchmarks, and the model's explainability is verified by comprehensive subjective evaluations.
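To illustrate the idea of a forensic prompt, the following is a minimal, hypothetical sketch: the attribute list and function names are illustrative assumptions, not taken from the paper. It shows how a prompt enumerating common forgery-indicative attributes might be composed before being sent to an MLLM (the MLLM call itself is omitted, since it depends on the specific model API).

```python
# Hypothetical sketch of forensic prompting (names and cue list are
# illustrative assumptions, not the paper's actual prompt).

# Typical forgery-indicative attributes the prompt directs attention to.
FORGERY_ATTRIBUTES = [
    "anatomical inconsistencies (hands, eyes, teeth)",
    "implausible lighting or shadows",
    "texture artifacts and unnatural over-smoothing",
    "distorted text or repeated background patterns",
    "physically inconsistent reflections",
]

def build_forensic_prompt(attributes=FORGERY_ATTRIBUTES) -> str:
    """Compose a prompt steering an MLLM toward salient forgery cues."""
    cue_list = "\n".join(f"- {a}" for a in attributes)
    return (
        "You are an image forensics analyst. Decide whether the given image "
        "is real or AI-generated. Examine the following cues:\n"
        f"{cue_list}\n"
        "Give a verdict (real/fake) and describe the specific evidence you "
        "observed for each relevant cue."
    )

prompt = build_forensic_prompt()
print(prompt)
```

In this sketch the prompt requests both a verdict and cue-level evidence, mirroring the paper's goal of pairing detection with human-comprehensible explanations.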