Guided and Fused: Efficient Frozen CLIP-ViT with Feature Guidance and Multi-Stage Feature Fusion for Generalizable Deepfake Detection

📅 2024-08-25
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
To address poor generalization and low data efficiency in generative image forgery detection, this paper proposes GFF, a lightweight and efficient framework built on a frozen CLIP-ViT backbone. The method introduces two key components: (1) the Deepfake-Specific Feature Guidance Module (DFGM), which steers the frozen backbone toward forgery-discriminative features while filtering out task-irrelevant information; and (2) FuseFormer, a multi-stage fusion module that integrates cross-layer ViT representations via concatenation and self-attention. The ViT backbone remains entirely frozen, and the model reaches state-of-the-art performance in only five training epochs. Trained on just four ProGAN classes, GFF attains nearly 99% accuracy on unseen GAN-based forgeries and 97% on unseen diffusion models, outperforming prior frozen-ViT approaches and demonstrating strong generalization and data efficiency.

📝 Abstract
The rise of generative models has sparked concerns about image authenticity online, highlighting the urgent need for an effective and general detector. Recent methods leveraging the frozen pre-trained CLIP-ViT model have made great progress in deepfake detection. However, these models often rely on visual-general features directly extracted by the frozen network, which contain excessive information irrelevant to the task, resulting in limited detection performance. To address this limitation, in this paper, we propose an efficient Guided and Fused Frozen CLIP-ViT (GFF), which integrates two simple yet effective modules. The Deepfake-Specific Feature Guidance Module (DFGM) guides the frozen pre-trained model in extracting features specifically for deepfake detection, reducing irrelevant information while preserving its generalization capabilities. The Multi-Stage Fusion Module (FuseFormer) captures low-level and high-level information by fusing features extracted from each stage of the ViT. This dual-module approach significantly improves deepfake detection by fully leveraging CLIP-ViT's inherent advantages. Extensive experiments demonstrate the effectiveness and generalization ability of GFF, which achieves state-of-the-art performance with optimal results in only 5 training epochs. Even when trained on only 4 classes of ProGAN, GFF achieves nearly 99% accuracy on unseen GANs and maintains an impressive 97% accuracy on unseen diffusion models.
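As a rough illustration of the multi-stage fusion idea described in the abstract (features from each frozen ViT stage are concatenated and then combined with attention), here is a minimal pure-Python sketch. The toy stage features, the mean-based attention scores, and the function names are hypothetical simplifications for exposition, not the paper's actual FuseFormer architecture.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def fuse_multi_stage(stage_features):
    """Illustrative stand-in for multi-stage feature fusion (hypothetical):
    concatenate per-stage features, then pool the stages with softmax
    attention weights derived from each stage's mean activation."""
    # Concatenation step: stack all stage features into one long vector.
    concatenated = [v for feats in stage_features for v in feats]
    # Attention-like pooling: weight each stage by the softmax of its mean.
    scores = [sum(f) / len(f) for f in stage_features]
    weights = softmax(scores)
    dim = len(stage_features[0])
    pooled = [
        sum(w * f[i] for w, f in zip(weights, stage_features))
        for i in range(dim)
    ]
    return concatenated, pooled

# Toy "features" from four frozen ViT stages (backbone weights never updated).
stages = [[0.1, 0.2], [0.3, 0.1], [0.5, 0.4], [0.2, 0.6]]
concat, fused = fuse_multi_stage(stages)
```

In the real method the fused representation would feed a lightweight classification head, so only the guidance and fusion modules are trained while the CLIP-ViT weights stay fixed.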
Problem

Research questions and friction points this paper is trying to address.

Detect diverse unseen forgery techniques effectively
Achieve high performance with minimal training data
Focus on forgery-specific features using lightweight designs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deepfake-Specific Feature Guidance Module (DFGM) steers feature extraction
FuseFormer fuses multi-stage ViT forgery features
Frozen CLIP-ViT backbone preserves generalization while focusing on forgery-specific information
Authors

Yingjian Chen
Henan Key Laboratory of Big Data Analysis and Processing, Henan University
Lei Zhang
Henan Key Laboratory of Big Data Analysis and Processing, Henan University
Yakun Niu
Henan Key Laboratory of Big Data Analysis and Processing, Henan University
Pei Chen
Henan Key Laboratory of Big Data Analysis and Processing, Henan University
Lei Tan
Henan Key Laboratory of Big Data Analysis and Processing, Henan University
Jing Zhou
International Business School, Henan University