🤖 AI Summary
Existing AI-generated image detection methods achieve high accuracy but lack interpretability, while multimodal large language models (MLLMs) often suffer from visual hallucinations in forgery detection, yielding explanations inconsistent with image content and human reasoning. To address this, we propose a vision–language joint reasoning framework that, for the first time, unifies detection, artifact localization, and natural-language explanation. Our key contributions are: (1) constructing the first annotation dataset for AI-generated images featuring bounding-box-level artifact localization and fine-grained textual defect descriptions; (2) designing a multi-stage collaborative supervised fine-tuning strategy that integrates visual grounding reasoning, artifact localization learning, and explanation generation; and (3) achieving state-of-the-art performance in detection accuracy, localization precision, and explanation consistency, significantly improving model trustworthiness and human interpretability.
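To make the dataset contribution concrete, the sketch below shows what a single annotation record might look like: a detection label, bounding-box-level artifact localizations paired with textual defect descriptions, and a free-form explanation. The field names, the `[x_min, y_min, x_max, y_max]` coordinate convention, and the example contents are illustrative assumptions, not the paper's released schema.

```python
# Hypothetical annotation record for one AI-generated image.
# Field names and the [x_min, y_min, x_max, y_max] pixel-coordinate
# bbox convention are assumptions for illustration, not the paper's schema.
record = {
    "image_path": "images/sample_0001.png",
    "label": "ai_generated",   # detection target: real vs. AI-generated
    "artifacts": [             # bounding-box-level artifact localization
        {
            "bbox": [412, 118, 508, 230],
            "description": "left hand shows six fingers with fused joints",
        },
        {
            "bbox": [120, 300, 260, 345],
            "description": "signboard text degenerates into glyph-like strokes",
        },
    ],
    # free-form explanation tying the verdict to the localized defects
    "explanation": (
        "The image is AI-generated: the hand anatomy and the garbled "
        "signage are characteristic synthesis artifacts."
    ),
}
```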
📝 Abstract
The rapid advancement of image generation technologies intensifies the demand for interpretable and robust detection methods. Although existing approaches often attain high accuracy, they typically operate as black boxes that provide no human-understandable justification. Multimodal Large Language Models (MLLMs), while not originally designed for forgery detection, exhibit strong analytical and reasoning capabilities; when properly fine-tuned, they can effectively identify AI-generated images and offer meaningful explanations. However, existing MLLMs still struggle with hallucination and often fail to align their visual interpretations with the actual image content and human reasoning. To bridge this gap, we construct a dataset of AI-generated images annotated with bounding boxes and descriptive captions that highlight synthesis artifacts, establishing a foundation for human-aligned, visually and textually grounded reasoning. We then fine-tune MLLMs with a multi-stage optimization strategy that progressively balances the objectives of accurate detection, visual localization, and coherent textual explanation. The resulting model achieves superior performance in both detecting AI-generated images and localizing visual flaws, significantly outperforming baseline methods.
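As a rough illustration of how a multi-stage strategy can progressively re-balance the three objectives, here is a minimal Python sketch. The stage names, ordering, and loss weights are assumptions made for illustration; the paper's actual training configuration is not specified in the abstract.

```python
from dataclasses import dataclass

@dataclass
class StageConfig:
    name: str
    epochs: int
    # weights for the (detection, localization, explanation) losses
    weights: tuple[float, float, float]

# Illustrative schedule; stage ordering and weights are assumptions,
# not the paper's reported configuration.
SCHEDULE = [
    StageConfig("visual grounding warmup", 1, (1.0, 0.0, 0.5)),
    StageConfig("artifact localization",   2, (0.5, 1.0, 0.5)),
    StageConfig("joint refinement",        2, (1.0, 1.0, 1.0)),
]

def combined_loss(losses: tuple[float, float, float],
                  weights: tuple[float, float, float]) -> float:
    """Per-stage weighted sum of detection/localization/explanation losses."""
    return sum(w * l for w, l in zip(weights, losses))

if __name__ == "__main__":
    # Toy per-objective loss values standing in for real model outputs.
    toy_losses = (0.42, 0.91, 1.30)
    for stage in SCHEDULE:
        print(stage.name, "->", round(combined_loss(toy_losses, stage.weights), 3))
```

The point of the staging is that each phase foregrounds one objective while keeping the others from drifting, so the final model is optimized jointly rather than for detection accuracy alone.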