AuthGuard: Generalizable Deepfake Detection via Language Guidance

📅 2025-06-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the poor generalization of deepfake detection methods—particularly their inability to identify unseen generation techniques—this paper proposes a language-guided universal detection paradigm. The authors leverage multimodal large language models (MLLMs) to generate commonsense textual supervision via few-shot prompting, which guides a vision encoder to jointly model statistical artifacts and logical/perceptual inconsistencies. To enhance robustness in contrastive learning between image and text modalities, they incorporate data uncertainty modeling, enabling interpretable reasoning. The core contribution is a language-vision co-designed universal detection framework that overcomes the limitations of conventional approaches relying solely on generation-specific artifacts. Experimental results demonstrate significant improvements: +6.15% and +16.68% AUC on the DFDC and DF40 benchmarks, respectively, and +24.69% accuracy on the DDVQA reasoning task—substantially enhancing out-of-distribution generalization and interpretability.

📝 Abstract
Existing deepfake detection techniques struggle to keep up with ever-evolving, unseen forgery methods. This limitation stems from their reliance on statistical artifacts learned during training, which are often tied to specific generation processes and may not be representative of samples from new, unseen deepfake generation methods encountered at test time. We propose that incorporating language guidance can improve deepfake detection generalization by integrating human-like commonsense reasoning -- such as recognizing logical inconsistencies and perceptual anomalies -- alongside statistical cues. To achieve this, we train an expert deepfake vision encoder by combining discriminative classification with image-text contrastive learning, where the text is generated by generalist MLLMs using few-shot prompting. This allows the encoder to extract both language-describable, commonsense deepfake artifacts and statistical forgery artifacts from pixel-level distributions. To further enhance robustness, we integrate data uncertainty learning into vision-language contrastive learning, mitigating noise in image-text supervision. Our expert vision encoder seamlessly interfaces with an LLM, further enabling more generalized and interpretable deepfake detection while also boosting accuracy. The resulting framework, AuthGuard, achieves state-of-the-art deepfake detection accuracy in both in-distribution and out-of-distribution settings, with AUC gains of 6.15% on the DFDC dataset and 16.68% on the DF40 dataset. Additionally, AuthGuard significantly enhances deepfake reasoning, improving performance by 24.69% on the DDVQA dataset.
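To make the "data uncertainty learning in vision-language contrastive learning" idea concrete, here is a minimal NumPy sketch of a CLIP-style symmetric contrastive loss with a per-sample heteroscedastic uncertainty weight (in the style of Kendall & Gal). The function names, the exact weighting scheme, and the log-variance parameterization are illustrative assumptions, not the paper's actual formulation; the intent is only to show how noisy MLLM-generated captions could be down-weighted during contrastive training.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit hypersphere."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def contrastive_loss_with_uncertainty(img_emb, txt_emb, log_var, temperature=0.07):
    """CLIP-style symmetric InfoNCE loss with per-sample uncertainty weighting.

    `log_var` is a hypothetical predicted log-variance per image-text pair;
    pairs with noisy (e.g. MLLM-hallucinated) captions get large log_var,
    which shrinks their contribution, while the +log_var term penalizes
    inflating uncertainty everywhere. Exact scheme is an assumption.
    """
    img = l2_normalize(img_emb)
    txt = l2_normalize(txt_emb)
    logits = img @ txt.T / temperature  # (N, N) similarity matrix

    def cross_entropy_diag(mat):
        # cross-entropy with the matching (diagonal) pair as the positive
        mat = mat - mat.max(axis=1, keepdims=True)  # numerical stability
        log_probs = mat - np.log(np.exp(mat).sum(axis=1, keepdims=True))
        return -np.diag(log_probs)

    # symmetric: image-to-text and text-to-image directions
    per_sample = 0.5 * (cross_entropy_diag(logits) + cross_entropy_diag(logits.T))
    # down-weight uncertain pairs, penalize unbounded variance growth
    weighted = per_sample * np.exp(-log_var) + log_var
    return weighted.mean()
```

In a full training loop this term would be added to the discriminative real/fake classification loss mentioned in the abstract, with `log_var` predicted by a small head on the vision encoder rather than supplied by hand.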
Problem

Research questions and friction points this paper is trying to address.

Detecting novel deepfakes beyond training artifacts
Combining language guidance with statistical cues for generalization
Improving detection robustness via vision-language uncertainty learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines image-text contrastive learning with deepfake detection
Integrates data uncertainty learning for robust vision-language contrast
Interfaces vision encoder with LLM for interpretable detection