Rethinking Vision-Language Model in Face Forensics: Multi-Modal Interpretable Forged Face Detector

πŸ“… 2025-03-26
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Addressing the trade-off between classification accuracy and interpretability in deepfake detection, this paper proposes M2F2-Det, an end-to-end multi-modal detector that combines CLIP's visual representations with the reasoning capabilities of a large language model (LLM) to jointly produce binary authenticity decisions and natural-language explanations. Methodologically: (1) CLIP enables cross-modal alignment between visual inputs and textual prompts; (2) a forgery-aware prompt-learning mechanism elicits discriminative cues specific to facial-manipulation artifacts; (3) the LLM generates coherent, verifiable textual justifications grounded in visual evidence. M2F2-Det achieves state-of-the-art performance in both detection accuracy and explanation quality across multiple benchmarks, improving cross-domain generalization and human interpretability. By unifying detection and explanation in a single framework, it establishes a new paradigm for trustworthy, human-centered AI in deepfake governance.

πŸ“ Abstract
Deepfake detection is a long-established research topic vital for mitigating the spread of malicious misinformation. Unlike prior methods that provide either binary classification results or textual explanations separately, we introduce a novel method capable of generating both simultaneously. Our method harnesses the multi-modal learning capability of the pre-trained CLIP and the unprecedented interpretability of large language models (LLMs) to enhance both the generalization and explainability of deepfake detection. Specifically, we introduce a multi-modal face forgery detector (M2F2-Det) that employs tailored face forgery prompt learning, incorporating the pre-trained CLIP to improve generalization to unseen forgeries. Also, M2F2-Det incorporates an LLM to provide detailed textual explanations of its detection decisions, enhancing interpretability by bridging the gap between natural language and subtle cues of facial forgeries. Empirically, we evaluate M2F2-Det on both detection and explanation generation tasks, where it achieves state-of-the-art performance, demonstrating its effectiveness in identifying and explaining diverse forgeries.
Problem

Research questions and friction points this paper is trying to address.

Develop a multi-modal deepfake detector that also generates explanations
Improve generalization to unseen forgeries by leveraging pre-trained CLIP
Enhance interpretability through LLM-generated textual explanations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-modal learning with CLIP for generalization
LLM integration for interpretable textual explanations
Tailored face forgery prompt learning technique
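The core prompt-learning idea can be illustrated with a minimal NumPy sketch in the style of CoOp-like CLIP prompt tuning: learnable context vectors are pooled with frozen class-token embeddings for "real" and "fake", and an image feature is scored against both text features by cosine similarity. All names, dimensions, and the mean-pooling step are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM = 512  # assumed CLIP ViT-B/32 feature dimension
N_CTX = 4        # number of learnable context ("prompt") vectors

# Learnable context vectors, shared across classes (CoOp-style).
ctx = rng.normal(size=(N_CTX, EMBED_DIM)) * 0.02

# Stand-ins for CLIP's frozen class-name token embeddings
# ("real face" / "fake face"); a real system would use CLIP's tokenizer.
class_tokens = {c: rng.normal(size=(EMBED_DIM,)) for c in ("real", "fake")}

def text_feature(cls: str) -> np.ndarray:
    """Pool learnable context with the class token.
    Mean pooling is a placeholder for CLIP's frozen text encoder."""
    tokens = np.vstack([ctx, class_tokens[cls]])
    feat = tokens.mean(axis=0)
    return feat / np.linalg.norm(feat)

def detect(image_feat: np.ndarray) -> dict:
    """Score an image feature against both class prompts and
    return class probabilities via a temperature-scaled softmax."""
    image_feat = image_feat / np.linalg.norm(image_feat)
    logits = np.array([image_feat @ text_feature(c) for c in ("real", "fake")])
    z = 100.0 * logits           # CLIP uses a logit scale of ~100
    z -= z.max()                 # numerically stable softmax
    probs = np.exp(z) / np.exp(z).sum()
    return {"real": float(probs[0]), "fake": float(probs[1])}

scores = detect(rng.normal(size=(EMBED_DIM,)))
```

In training, only `ctx` would receive gradients while the CLIP encoders stay frozen; the forgery-aware twist described in the paper would shape those context vectors toward manipulation-specific cues.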
πŸ”Ž Similar Papers
No similar papers found.
Xiao Guo
Michigan State University
Xiufeng Song
Shanghai Jiao Tong University
Yue Zhang
Michigan State University
Xiaohong Liu
Shanghai Jiao Tong University
Xiaoming Liu
Michigan State University