Think Twice before Adaptation: Improving Adaptability of DeepFake Detection via Online Test-Time Adaptation

📅 2025-05-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Deepfake detectors suffer significant performance degradation under test-time distribution shifts, such as post-processing manipulations. To address this, we propose T²A—a source-data-free and label-free online test-time adaptation method. Our approach introduces three key innovations: (1) an uncertainty-aware negative learning objective that replaces conventional entropy minimization and is theoretically shown to complement it; (2) an uncertainty-prioritized sample selection strategy coupled with a gradient masking mechanism that focuses adaptation on the most informative samples and parameters; and (3) importance-based sample reweighting to enhance robustness. Evaluated across diverse distribution shifts—including compression, blurring, and noise injection—and multiple post-processing attacks, T²A achieves state-of-the-art performance on mainstream benchmarks (e.g., FaceForensics++, Celeb-DF, and DFDC). It significantly improves inference robustness and cross-domain generalization of deepfake detectors without requiring access to source data or ground-truth labels during adaptation.
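To make the contrast between the two objectives concrete, the sketch below compares standard entropy minimization with a simplified uncertainty-aware negative learning loss for the binary real/fake setting. This is an illustrative reconstruction, not the paper's actual objective: the function names, the use of normalized entropy as the uncertainty weight, and the binary complementary-label choice are all assumptions made here for clarity.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy_minimization_loss(logits):
    """Standard EM-based TTA objective: minimize prediction entropy,
    which sharpens the model's initial (possibly wrong) prediction."""
    p = softmax(logits)
    return float(-(p * np.log(p + 1e-12)).sum(axis=-1).mean())

def negative_learning_loss(logits):
    """Hypothetical uncertainty-aware negative learning sketch:
    rather than reinforcing the argmax class directly, suppress the
    complementary (non-predicted) class, with each sample weighted by
    its normalized predictive entropy so uncertain samples 'think twice'."""
    p = softmax(logits)
    pred = p.argmax(axis=-1)
    # normalized entropy in [0, 1] serves as the uncertainty weight
    ent = -(p * np.log(p + 1e-12)).sum(axis=-1) / np.log(p.shape[-1])
    comp = 1 - pred  # complementary label in the binary real/fake case
    nl = -np.log(1.0 - p[np.arange(len(p)), comp] + 1e-12)
    return float((ent * nl).mean())

# Confident batch vs. uncertain batch of logits
confident = np.array([[4.0, -4.0]])
uncertain = np.array([[0.1, -0.1]])
print(entropy_minimization_loss(confident) < entropy_minimization_loss(uncertain))
print(negative_learning_loss(uncertain) > negative_learning_loss(confident))
```

The key behavioral difference: entropy minimization pushes every sample toward its current argmax, while the negative-learning variant scales its update by uncertainty, so near-confident samples contribute almost nothing and cannot lock in an early mistake.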

📝 Abstract
Deepfake (DF) detectors face significant challenges when deployed in real-world environments, particularly when encountering test samples that deviate from the training data through either postprocessing manipulations or distribution shifts. We demonstrate that postprocessing techniques can completely obscure the generation artifacts present in DF samples, leading to performance degradation of DF detectors. To address these challenges, we propose Think Twice before Adaptation (T²A), a novel online test-time adaptation method that enhances the adaptability of detectors during inference without requiring access to source training data or labels. Our key idea is to enable the model to explore alternative options through an Uncertainty-aware Negative Learning objective rather than solely relying on its initial predictions, as commonly seen in entropy minimization (EM)-based approaches. We also introduce an Uncertain Sample Prioritization strategy and a Gradients Masking technique to improve adaptation by focusing on important samples and model parameters. Our theoretical analysis demonstrates that the proposed negative learning objective exhibits complementary behavior to EM, facilitating better adaptation capability. Empirically, our method achieves state-of-the-art results compared to existing test-time adaptation (TTA) approaches and significantly enhances the resilience and generalization of DF detectors during inference. Code is available at https://github.com/HongHanh2104/T2A-Think-Twice-Before-Adaptation.
Problem

Research questions and friction points this paper is trying to address.

Enhancing DeepFake detector adaptability to real-world test samples
Addressing performance degradation from postprocessing and distribution shifts
Improving inference resilience without source data or labels
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online test-time adaptation without source data
Uncertainty-aware Negative Learning objective
Uncertain Sample Prioritization and Gradients Masking
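The second and third innovations above can be sketched as two small utilities: one that ranks a batch by predictive entropy to prioritize uncertain samples, and one that masks low-magnitude gradient entries so only the most influential parameters are updated. Both functions, their names, and the quantile-based masking rule are assumptions made for illustration; the paper's actual selection and masking criteria may differ.

```python
import numpy as np

def prioritize_uncertain(probs, k):
    """Hypothetical sketch of Uncertain Sample Prioritization:
    rank a batch by predictive entropy and keep the k most uncertain
    samples, assuming confident samples carry little adaptation signal."""
    ent = -(probs * np.log(probs + 1e-12)).sum(axis=-1)
    return np.argsort(ent)[::-1][:k]  # indices, most uncertain first

def mask_gradients(grads, keep_ratio=0.5):
    """Hypothetical sketch of Gradients Masking: zero out the
    smallest-magnitude gradient entries so only the top keep_ratio
    fraction of parameters is updated during test-time adaptation."""
    thresh = np.quantile(np.abs(grads).ravel(), 1.0 - keep_ratio)
    return np.where(np.abs(grads) >= thresh, grads, 0.0)

# Batch of per-sample class probabilities (real vs. fake)
probs = np.array([[0.99, 0.01],   # confident
                  [0.55, 0.45],   # highly uncertain
                  [0.80, 0.20]])  # moderately confident
print(prioritize_uncertain(probs, 2))  # uncertain samples ranked first

grads = np.array([0.1, -2.0, 0.5, 0.05])
print(mask_gradients(grads, keep_ratio=0.5))
```

In an online TTA loop, one would compute these on each incoming test batch: adapt only on the selected indices, and apply the mask to the gradients before the optimizer step.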
Hong-Hanh Nguyen-Le
School of Computer Science, University College Dublin
Van-Tuan Tran
School of Computer Science and Statistics, Trinity College Dublin
Dinh-Thuc Nguyen
University of Science, Ho Chi Minh City, Vietnam
Nhien-An Le-Khac
Associate Professor of Digital Forensics and Cyber Security, University College Dublin
Digital Forensics · Cybersecurity · AI Security · AI Forensics · Knowledge Engineering