Open-Set Deepfake Detection: A Parameter-Efficient Adaptation Method with Forgery Style Mixture

📅 2024-08-23
🏛️ arXiv.org
📈 Citations: 5
Influential: 0
🤖 AI Summary
To address weak generalization and inefficient domain adaptation in open-set face forgery detection, this paper proposes a lightweight cross-domain adaptive method based on statistical modeling of forgery-style discrepancies. The approach tackles the problem by (1) introducing a forgery-style mixing augmentation mechanism that explicitly models inter-domain variations in forged textures and statistical distributions; and (2) designing a parameter-efficient adaptation framework tailored for Vision Transformers (ViTs), which fine-tunes only lightweight modules to jointly capture global and local forensic cues while fully preserving ImageNet pre-trained knowledge. Under the open-set setting, the method achieves state-of-the-art generalization performance, reduces trainable parameters by over 90%, and significantly enhances detection robustness and edge-deployment efficiency.

📝 Abstract
Open-set face forgeries pose significant security threats and present substantial challenges for existing detection models. These detectors primarily have two limitations: they cannot generalize across unknown forgery domains, and they adapt inefficiently to new data. To address these issues, we introduce an approach that is both general and parameter-efficient for face forgery detection. It builds on the assumption that different forgery source domains exhibit distinct style statistics. Previous methods typically require fully fine-tuning pre-trained networks, consuming substantial time and computational resources. Instead, we design a forgery-style mixture formulation that augments the diversity of forgery source domains, enhancing the model's generalizability across unseen domains. Drawing on recent advancements in vision transformers (ViTs) for face forgery detection, we develop a parameter-efficient ViT-based detection model that includes lightweight forgery feature extraction modules, enabling the model to extract global and local forgery clues simultaneously. We optimize only the inserted lightweight modules during training, keeping the original ViT structure and its pre-trained ImageNet weights intact. This training strategy effectively preserves the informative pre-trained knowledge while flexibly adapting the model to the task of Deepfake detection. Extensive experimental results demonstrate that the designed model achieves state-of-the-art generalizability with significantly reduced trainable parameters, representing an important step toward open-set Deepfake detection in the wild.
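The core assumption above — that forgery source domains differ in their feature style statistics — suggests an augmentation that interpolates channel-wise mean and standard deviation between samples from two domains. The sketch below is only a MixStyle-like illustration of that idea under stated assumptions; `mix_forgery_style` and its `alpha` parameter are hypothetical names, not the paper's actual formulation.

```python
import numpy as np

def mix_forgery_style(feat_a, feat_b, alpha=0.5):
    """Interpolate the channel-wise style statistics of two feature maps.

    feat_a, feat_b: arrays of shape (C, H, W), e.g. features of samples
    drawn from two different forgery source domains.
    alpha: weight on feat_a's statistics (alpha=1 keeps feat_a unchanged).
    """
    # Per-channel style statistics of each domain sample.
    mu_a = feat_a.mean(axis=(1, 2), keepdims=True)
    sig_a = feat_a.std(axis=(1, 2), keepdims=True) + 1e-6
    mu_b = feat_b.mean(axis=(1, 2), keepdims=True)
    sig_b = feat_b.std(axis=(1, 2), keepdims=True) + 1e-6

    # Mixed "novel domain" statistics.
    mu_mix = alpha * mu_a + (1 - alpha) * mu_b
    sig_mix = alpha * sig_a + (1 - alpha) * sig_b

    # Normalize feat_a, then re-style it with the mixed statistics.
    return sig_mix * (feat_a - mu_a) / sig_a + mu_mix
```

Sampling `alpha` randomly per batch yields features whose style lies between the two source domains, which is one plausible way to "augment the diversity of forgery source domains" as the abstract describes.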
Problem

Research questions and friction points this paper is trying to address.

Detecting unknown deepfake domains with limited generalization
Reducing computational cost in adapting to new forgery data
Enhancing model efficiency while preserving detection accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Forgery style mixture formulation for domain diversity
Parameter-efficient ViT model with lightweight modules
Optimizing only inserted modules while preserving pre-trained weights
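The parameter-efficiency claim (trainable parameters reduced by over 90%) follows from inserting small bottleneck modules into a frozen backbone and training only those. The toy sketch below illustrates the accounting with NumPy; `AdapterBlock`, the bottleneck width, and the single dense layer standing in for a full ViT block are all illustrative assumptions, not the paper's architecture.

```python
import numpy as np

class Linear:
    """Minimal dense layer with a trainable/frozen flag."""
    def __init__(self, d_in, d_out, trainable):
        self.W = np.random.randn(d_in, d_out) * 0.02
        self.b = np.zeros(d_out)
        self.trainable = trainable

    def __call__(self, x):
        return x @ self.W + self.b

    def n_params(self):
        return self.W.size + self.b.size

class AdapterBlock:
    """Frozen backbone layer plus a small trainable bottleneck adapter
    attached as a residual branch (down-project, ReLU, up-project)."""
    def __init__(self, d_model=768, d_bottleneck=32):
        self.frozen = Linear(d_model, d_model, trainable=False)
        self.down = Linear(d_model, d_bottleneck, trainable=True)
        self.up = Linear(d_bottleneck, d_model, trainable=True)

    def __call__(self, x):
        h = self.frozen(x)
        return h + self.up(np.maximum(self.down(h), 0.0))

# 12 blocks, as in a ViT-Base-sized backbone.
blocks = [AdapterBlock() for _ in range(12)]
trainable = sum(m.n_params() for b in blocks for m in (b.down, b.up))
total = trainable + sum(b.frozen.n_params() for b in blocks)
print(f"trainable fraction: {trainable / total:.1%}")
```

Even in this toy setup the trainable fraction stays well under 10%, consistent with the >90% reduction reported: the frozen `d_model × d_model` weights dominate, while each adapter contributes only about `2 × d_model × d_bottleneck` parameters.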
Chenqi Kong
Rapid-Rich Object Search (ROSE) Lab, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, 639798
Anwei Luo
Jiangxi University of Finance and Economics
deepfake, face forgery detection, multimedia security, forensics
Peijun Bao
Rapid-Rich Object Search (ROSE) Lab, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, 639798
Haoliang Li
Department of Electrical Engineering, City University of Hong Kong
AI Security, Information Forensics and Security, Machine Learning
Renjie Wan
Department of Computer Science, Hong Kong Baptist University
Digital Watermarking, AI Security, Image Processing
Zengwei Zheng
Department of Computer Science and Computing, Zhejiang University City College, Zhejiang, China
Anderson Rocha
Artificial Intelligence Lab. (Recod.ai) at the University of Campinas, Campinas 13084-851, Brazil
A. Kot
Rapid-Rich Object Search (ROSE) Lab, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, 639798