🤖 AI Summary
To address poor generalization and limited data efficiency in generative image forgery (deepfake) detection, this paper proposes GFF (Guided and Fused Frozen CLIP-ViT), a lightweight framework built on a frozen CLIP-ViT backbone. The method introduces two modules: (1) the Deepfake-Specific Feature Guidance Module (DFGM), which steers the frozen backbone toward forgery-discriminative features while filtering out task-irrelevant information; and (2) FuseFormer, a Multi-Stage Fusion Module that integrates low-level and high-level representations from each ViT stage via concatenation and self-attention. Because the backbone remains entirely frozen, GFF reaches its best results within only 5 training epochs. Trained on just 4 classes of ProGAN images, it achieves nearly 99% accuracy on unseen GAN-based forgeries and 97% on unseen diffusion models, outperforming prior frozen-ViT approaches and demonstrating strong generalization and data efficiency.
📝 Abstract
The rise of generative models has sparked concerns about image authenticity online, highlighting the urgent need for an effective and general detector. Recent methods leveraging the frozen pre-trained CLIP-ViT model have made great progress in deepfake detection. However, these models often rely on visual-general features directly extracted by the frozen network, which contain excessive information irrelevant to the task, resulting in limited detection performance. To address this limitation, in this paper, we propose an efficient Guided and Fused Frozen CLIP-ViT (GFF), which integrates two simple yet effective modules. The Deepfake-Specific Feature Guidance Module (DFGM) guides the frozen pre-trained model in extracting features specifically for deepfake detection, reducing irrelevant information while preserving its generalization capabilities. The Multi-Stage Fusion Module (FuseFormer) captures low-level and high-level information by fusing features extracted from each stage of the ViT. This dual-module approach significantly improves deepfake detection by fully leveraging CLIP-ViT's inherent advantages. Extensive experiments demonstrate the effectiveness and generalization ability of GFF, which achieves state-of-the-art performance with optimal results in only 5 training epochs. Even when trained on only 4 classes of ProGAN, GFF achieves nearly 99% accuracy on unseen GANs and maintains an impressive 97% accuracy on unseen diffusion models.
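The abstract describes FuseFormer only at a high level: features from each ViT stage are fused by concatenation followed by self-attention, then passed to a detection head. A minimal, dependency-free sketch of that fusion pattern is shown below. This is an illustration of the general concatenate-then-attend idea, not the paper's actual implementation: the function names (`self_attention`, `fuse_stages`), the identity query/key/value projections, and the mean-pooling at the end are all simplifying assumptions.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(tokens):
    """Single-head scaled dot-product self-attention.

    Illustrative only: uses identity Q/K/V projections, whereas a real
    Transformer block would apply learned linear maps and residuals.
    """
    d = len(tokens[0])
    scale = math.sqrt(d)
    out = []
    for q in tokens:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / scale for k in tokens]
        weights = softmax(scores)
        # Each output token is a convex combination of all input tokens.
        out.append([sum(w * v[i] for w, v in zip(weights, tokens)) for i in range(d)])
    return out

def fuse_stages(stage_features):
    """Fuse per-stage ViT features: concatenate along the token axis,
    attend across stages, then mean-pool into one vector for the head."""
    tokens = [f for stage in stage_features for f in stage]  # concatenation
    attended = self_attention(tokens)
    d = len(attended[0])
    return [sum(t[i] for t in attended) / len(attended) for i in range(d)]
```

In this toy form, each "stage" contributes a few feature vectors (e.g. a CLS-like summary per ViT block), and attention lets low-level and high-level cues interact before pooling; the fused vector would feed a lightweight classifier while the backbone stays frozen.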