FakeShield: Explainable Image Forgery Detection and Localization via Multi-modal Large Language Models

📅 2024-10-03
🏛️ arXiv.org
📈 Citations: 14
Influential: 2
🤖 AI Summary
Existing image forgery detection and localization methods suffer from poor interpretability and weak generalization across diverse manipulation types, especially amid the surge of generative-AI-driven image forgery. Method: The paper introduces the explainable image forgery detection and localization (IFDL) task, builds the first benchmark dataset with multimodal tampering descriptions (MMTD-Set), and designs two core modules: a Domain Tag-guided Explainable Forgery Detection Module (DTE-FDM) and a text-guided Multi-modal Forgery Localization Module (MFLM), jointly modeling vision-language semantics and generating pixel-level masks. Contribution/Results: The approach achieves state-of-the-art performance across Photoshop manipulations, DeepFakes, and AIGC-edited images, simultaneously outputting image-level authenticity verdicts, pixel-level localization maps, and fine-grained natural-language justifications, significantly improving model trustworthiness and human interpretability.

📝 Abstract
The rapid development of generative AI is a double-edged sword, which not only facilitates content creation but also makes image manipulation easier and more difficult to detect. Although current image forgery detection and localization (IFDL) methods are generally effective, they tend to face two challenges: 1) black-box nature with unknown detection principle, 2) limited generalization across diverse tampering methods (e.g., Photoshop, DeepFake, AIGC-Editing). To address these issues, we propose the explainable IFDL task and design FakeShield, a multi-modal framework capable of evaluating image authenticity, generating tampered region masks, and providing a judgment basis based on pixel-level and image-level tampering clues. Additionally, we leverage GPT-4o to enhance existing IFDL datasets, creating the Multi-Modal Tamper Description dataSet (MMTD-Set) for training FakeShield's tampering analysis capabilities. Meanwhile, we incorporate a Domain Tag-guided Explainable Forgery Detection Module (DTE-FDM) and a Multi-modal Forgery Localization Module (MFLM) to address various types of tamper detection interpretation and achieve forgery localization guided by detailed textual descriptions. Extensive experiments demonstrate that FakeShield effectively detects and localizes various tampering techniques, offering an explainable and superior solution compared to previous IFDL methods.
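As the abstract describes, FakeShield chains two modules: DTE-FDM produces a domain tag, an authenticity verdict, and a textual judgment basis, and MFLM consumes that text to produce a pixel-level tampered-region mask. A minimal sketch of this two-stage flow (all function names, data shapes, and logic here are illustrative placeholders, not the authors' implementation):

```python
from dataclasses import dataclass

@dataclass
class DetectionResult:
    domain_tag: str    # e.g. "photoshop", "deepfake", "aigc-editing"
    is_forged: bool    # image-level authenticity verdict
    explanation: str   # natural-language description of tampering clues

def dte_fdm(image) -> DetectionResult:
    """Stand-in for the Domain Tag-guided Explainable Forgery Detection
    Module: tag the tampering domain, judge authenticity, and describe
    the pixel-/image-level clues in text (here a canned answer)."""
    return DetectionResult(
        domain_tag="photoshop",
        is_forged=True,
        explanation="Splicing artifacts along the subject boundary.",
    )

def mflm(image, explanation: str):
    """Stand-in for the Multi-modal Forgery Localization Module:
    turn the textual clue description into a pixel-level binary mask
    (here a tiny stub grid standing in for a segmentation map)."""
    return [[0, 1], [1, 0]]

def fakeshield(image):
    """Two-stage pipeline: detect/explain first, then localize only
    if the image is judged forged."""
    det = dte_fdm(image)
    mask = mflm(image, det.explanation) if det.is_forged else None
    return det, mask

det, mask = fakeshield(object())  # any image stand-in
print(det.domain_tag, det.is_forged)  # photoshop True
```

The point of the sketch is the data flow: the localization stage is conditioned on the detection stage's textual explanation rather than running independently, which is what makes the localization "text-guided."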
Problem

Research questions and friction points this paper is trying to address.

Detect and localize image forgeries explainably
Overcome black-box nature in forgery detection methods
Improve generalization across diverse tampering techniques
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-modal framework for explainable forgery detection
GPT-4o enhanced dataset for tampering analysis
Domain Tag-guided detection and localization modules
Zhipei Xu
School of Electronic and Computer Engineering, Peking University; Peking University Shenzhen Graduate School-Rabbitpre AIGC Joint Research Laboratory
Xuanyu Zhang
School of Electronic and Computer Engineering, Peking University
Runyi Li
Peking University
Low-level Vision, Trustworthy AI
Zecheng Tang
School of Electronic and Computer Engineering, Peking University
Qing Huang
Chinese Academy of Sciences
Material Editing
Jian Zhang
School of Electronic and Computer Engineering, Peking University; Peking University Shenzhen Graduate School-Rabbitpre AIGC Joint Research Laboratory