🤖 AI Summary
Existing general-purpose deepfake detection methods for VIPs and other identity-specific faces neglect authentic facial priors, resulting in suboptimal accuracy and limited interpretability.
Method: We propose the first identity-customized, interpretable detection framework, featuring an identity-specific semantic reasoning mechanism, a multi-stage identity-discriminative learning paradigm, and the first identity-aware benchmark—VIPBench. Our approach integrates fine-tuned multimodal large language models (MLLMs), identity-level discriminative learning, structured facial representation modeling, and interpretable reasoning.
Contribution/Results: Evaluated on VIPBench, our method achieves state-of-the-art performance against seven face-swap and seven full-face synthesis attacks. It demonstrates superior accuracy, strong robustness to diverse manipulations, and fine-grained identity awareness—enabling both reliable detection and human-understandable, identity-specific explanations.
📝 Abstract
Securing personal identity against deepfake attacks is increasingly critical in the digital age, especially for celebrities and political figures whose faces are easily accessible and frequently targeted. Most existing deepfake detection methods focus on general-purpose scenarios and often ignore the valuable prior knowledge of known facial identities, e.g., "VIP individuals" whose authentic facial data are already available. In this paper, we propose **VIPGuard**, a unified multimodal framework designed to capture fine-grained and comprehensive facial representations of a given identity, compare them against potentially fake or similar-looking faces, and reason over these comparisons to make accurate and explainable predictions. Specifically, our framework consists of three main stages. First, we fine-tune a multimodal large language model (MLLM) to learn detailed, structured facial attributes. Second, we perform identity-level discriminative learning to enable the model to distinguish subtle differences between highly similar faces, including real and fake variations. Finally, we introduce user-specific customization: we model the unique characteristics of the target identity and perform semantic reasoning via the MLLM to enable personalized and explainable deepfake detection. Our framework shows clear advantages over prior work: traditional detectors rely mainly on low-level visual cues and provide no human-understandable explanations, while existing MLLM-based models often lack a detailed understanding of specific face identities. To facilitate evaluation, we build a comprehensive identity-aware benchmark, **VIPBench**, for personalized deepfake detection, covering 7 recent face-swapping and 7 full-face synthesis techniques.
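The three-stage pipeline described in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch only: every class, method, and the toy attribute-matching rule are hypothetical stand-ins, not the paper's actual models or API (the real stages involve a fine-tuned MLLM, not dictionary lookups).

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    is_fake: bool
    explanation: str  # human-readable, identity-specific reasoning

class VIPGuardSketch:
    """Hypothetical skeleton of the three stages; names are illustrative."""

    def __init__(self, vip_reference_faces):
        # Customization prerequisite (Stage 3): enroll authentic faces
        # of the protected identity as an identity profile.
        self.vip_profile = [self.describe_attributes(f) for f in vip_reference_faces]

    def describe_attributes(self, face):
        # Stage 1 stand-in: the fine-tuned MLLM would emit structured
        # facial attributes; here we just echo a toy attribute dict.
        return {"eye_shape": face.get("eye_shape"), "jawline": face.get("jawline")}

    def detect(self, query_face):
        # Stage 2 stand-in: identity-level discrimination — compare the
        # query's attributes against the enrolled authentic profile.
        query_attrs = self.describe_attributes(query_face)
        mismatches = [
            k for k in query_attrs
            if all(ref[k] != query_attrs[k] for ref in self.vip_profile)
        ]
        # Stage 3 stand-in: semantic reasoning — turn attribute-level
        # inconsistencies into a human-understandable explanation.
        if mismatches:
            return Verdict(True, f"Attributes inconsistent with enrolled identity: {mismatches}")
        return Verdict(False, "All attributes consistent with the enrolled identity.")

guard = VIPGuardSketch([{"eye_shape": "round", "jawline": "sharp"}])
real = guard.detect({"eye_shape": "round", "jawline": "sharp"})
fake = guard.detect({"eye_shape": "narrow", "jawline": "sharp"})
```

The sketch only conveys the control flow: attribute description, identity-level comparison, then explanation generation; in the paper all three steps are carried out by (or on top of) the multimodal LLM.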