Seeing Before Reasoning: A Unified Framework for Generalizable and Explainable Fake Image Detection

📅 2025-09-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address weak perceptual capability—caused by misaligned visual encoder objectives—and catastrophic forgetting—induced by distributional shift during fine-tuning—in multimodal large language models (MLLMs) for AI-generated image detection, this paper proposes a “see-first-then-reason” paradigm. Methodologically, we introduce Forensic-Chat, an MLLM enhanced via artifact-aware visual pretraining and diverse instruction tuning to strengthen low-level forensic trace perception. We further construct ExplainFake-Bench, the first benchmark supporting explainability-aware evaluation. Experiments demonstrate substantial improvements in cross-model and cross-scenario detection accuracy and explanation reliability. Our approach effectively mitigates reliance on linguistic shortcuts, preserves pretrained visual knowledge, and enables multi-turn interactive detection—all while maintaining rigorous interpretability standards.

📝 Abstract
Detecting AI-generated images with multimodal large language models (MLLMs) has gained increasing attention due to their rich world knowledge, common-sense reasoning, and potential for explainability. However, naively applying these MLLMs for detection often leads to suboptimal performance. We argue that the root of this failure lies in a fundamental mismatch: MLLMs are asked to reason about fakes before they can truly see them. First, they do not really see: existing MLLMs' vision encoders are primarily optimized for semantic-oriented recognition rather than the perception of low-level signals, leaving them insensitive to subtle forgery traces. Without access to reliable perceptual evidence, the model grounds its judgment on incomplete and limited visual observations. Second, existing fine-tuning data for detection typically uses narrow, instruction-style formats, which diverge sharply from the diverse, heterogeneous distributions seen in pretraining. In the absence of meaningful visual cues, the model therefore exploits these linguistic shortcuts, resulting in catastrophic forgetting of pretrained knowledge (even basic dialogue capabilities). In response, we advocate for a new paradigm: seeing before reasoning. We propose that MLLMs should first be trained to perceive artifacts, strengthening their artifact-aware visual perception, so that subsequent reasoning is grounded in actual observations. We therefore propose Forensic-Chat, a generalizable, explainable, and still-conversational (multi-round dialogue) assistant for fake image detection. We also propose ExplainFake-Bench, a benchmark tailored to evaluating an MLLM's explainability for image forensics across five key aspects. Extensive experiments show its superior generalization and genuinely reliable explainability.
Problem

Research questions and friction points this paper is trying to address.

Detecting AI-generated images using multimodal language models
Addressing mismatch between visual perception and reasoning
Improving generalization and explainability in fake detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Strengthens artifact-aware visual perception in MLLMs
Introduces Forensic-Chat for conversational fake detection
Proposes ExplainFake-Bench benchmark for explainability evaluation
Kaiqing Lin
Shenzhen University
Multimedia Forensics, Multimedia Security, Steganalysis
Zhiyuan Yan
Tencent Youtu Lab
Ruoxin Chen
Tencent Youtu Lab
Junyan Ye
SYSU
Computer Vision and Deep Learning
Ke-Yue Zhang
Tencent YouTu
face, deep-learning
Yue Zhou
Guangdong Provincial Key Laboratory of Intelligent Information Processing, Shenzhen Key Laboratory of Media Security, and SZU-AFS Joint Innovation Center for AI Technology, Shenzhen University
Peng Jin
Peking University
Bin Li
Guangdong Provincial Key Laboratory of Intelligent Information Processing, Shenzhen Key Laboratory of Media Security, and SZU-AFS Joint Innovation Center for AI Technology, Shenzhen University
Taiping Yao
Tencent
face anti-spoofing, deepfake, adversarial attack
Shouhong Ding
Tencent Youtu Lab