🤖 AI Summary
This work addresses the limitations of existing deepfake face detection methods, which often suffer from poor interpretability, hallucination, and insufficient capture of fine-grained details. To overcome these challenges, the authors propose a human-like reasoning framework that leverages a novel chain-of-thought dataset, CoT-Face, tailored for vision-language models to generate interpretable reasoning paths. Additionally, they introduce a forgery latent space distribution modeling module to capture high-frequency forgery cues. A self-evolving reasoning mechanism is further designed, integrating reinforcement learning to iteratively refine textual explanations in two stages, thereby enhancing explanation fidelity and mitigating hallucination. Experimental results demonstrate that the proposed approach outperforms state-of-the-art methods in detection accuracy, forgery localization precision, and cross-dataset generalization.
📝 Abstract
With the rapid advancement of AIGC technology, developing identification methods to address the security challenges posed by deepfakes has become urgent. Face forgery identification techniques fall into two categories: traditional classification methods and explainable vision-language model (VLM) approaches. The former provides classification results but lacks explanatory ability, while the latter, although capable of providing coarse-grained explanations, often suffers from hallucinations and insufficient detail. To overcome these limitations, we propose EvolveReason, which mimics the reasoning and observational processes of human auditors when identifying face forgeries. By constructing a chain-of-thought dataset, CoT-Face, tailored for advanced VLMs, our approach guides the model to think in a human-like way, prompting it to output both its reasoning process and its judgment. This provides practitioners with reliable analysis and helps alleviate hallucinations. Additionally, our framework incorporates a forgery latent-space distribution capture module, enabling EvolveReason to identify high-frequency forgery cues that are difficult to extract from the original images. To further enhance the reliability of textual explanations, we introduce a self-evolution exploration strategy that leverages reinforcement learning, allowing the model to iteratively explore and optimize its textual descriptions in a two-stage process. Experimental results show that EvolveReason not only outperforms current state-of-the-art methods in identification performance but also accurately localizes forgery details and generalizes well across datasets.