🤖 AI Summary
Current face forgery localization methods lack interpretability, producing only binary segmentation masks without revealing the semantic causes or severity levels of manipulations. To address this, we introduce MMTT—the first large-scale multimodal interpretable benchmark for face forgery localization—comprising 128K image-text pairs with human-annotated, fine-grained descriptions of forgery artifacts. We propose ForgeryTalker, a novel architecture that jointly models pixel-level localization and natural language explanation through three key components: multimodal large language model (MLLM) fine-tuning, a forgery-clue prompting network, and a region-aware prompting mechanism. This enables unified output of precise spatial masks and human-understandable semantic explanations. On MMTT, ForgeryTalker achieves a 12.3% improvement in localization accuracy and significantly outperforms baselines in explanation fidelity and semantic consistency. All code, data, and models are publicly released to advance standardized, interpretable deepfake detection research.
📝 Abstract
Image forgery localization, which centers on identifying tampered pixels within an image, has seen significant advancements. Traditional approaches often model this challenge as a variant of image segmentation, treating the binary segmentation of forged areas as the end product. We argue that the basic binary forgery mask is inadequate for explaining model predictions. It doesn't clarify why the model pinpoints certain areas and treats all forged pixels the same, making it hard to spot the most fake-looking parts. In this study, we mitigate the aforementioned limitations by generating salient region-focused interpretation for the forgery images. To support this, we craft a Multi-Modal Tramper Tracing (MMTT) dataset, comprising facial images manipulated using deepfake techniques and paired with manual, interpretable textual annotations. To harvest high-quality annotation, annotators are instructed to meticulously observe the manipulated images and articulate the typical characteristics of the forgery regions. Subsequently, we collect a dataset of 128,303 image-text pairs. Leveraging the MMTT dataset, we develop ForgeryTalker, an architecture designed for concurrent forgery localization and interpretation. ForgeryTalker first trains a forgery prompter network to identify the pivotal clues within the explanatory text. Subsequently, the region prompter is incorporated into multimodal large language model for finetuning to achieve the dual goals of localization and interpretation. Extensive experiments conducted on the MMTT dataset verify the superior performance of our proposed model. The dataset, code as well as pretrained checkpoints will be made publicly available to facilitate further research and ensure the reproducibility of our results.