🤖 AI Summary
Existing DeepFake detectors rely heavily on facial regions, limiting their effectiveness against full-frame manipulations and fully synthetic text-to-video (T2V) or image-to-video (I2V) content. To address this, we propose UNITE, a universal video forgery detector that breaks the face-centric paradigm. UNITE establishes the first unified detection framework covering facial manipulation, background editing, and end-to-end generative content. It introduces an Attention Diversity (AD) loss to explicitly suppress facial bias and encourage more generalizable spatial attention. Leveraging domain-agnostic video features extracted by SigLIP-So400M and a Transformer-based architecture, UNITE jointly optimizes the AD loss and cross-entropy loss on heterogeneous multi-source data. Evaluated across diverse benchmarks—including facial tampering, background editing, and T2V/I2V synthetic videos—UNITE consistently outperforms state-of-the-art methods, demonstrating superior cross-scenario adaptability and generalization.
📝 Abstract
Existing DeepFake detection techniques primarily focus on facial manipulations, such as face-swapping or lip-syncing. However, advancements in text-to-video (T2V) and image-to-video (I2V) generative models now allow fully AI-generated synthetic content and seamless background alterations, challenging face-centric detection methods and demanding more versatile approaches. To address this, we introduce the Universal Network for Identifying Tampered and synthEtic videos (UNITE) model, which, unlike traditional detectors, captures full-frame manipulations. UNITE extends detection capabilities to scenarios without faces, non-human subjects, and complex background modifications. It leverages a transformer-based architecture that processes domain-agnostic features extracted from videos via the SigLIP-So400M foundation model. Because few datasets cover both facial/background alterations and T2V/I2V content, we integrate task-irrelevant data alongside standard DeepFake datasets during training. We further mitigate the model's tendency to over-focus on faces with an attention-diversity (AD) loss, which promotes diverse spatial attention across video frames. Combining the AD loss with cross-entropy improves detection performance across varied contexts. Comparative evaluations demonstrate that UNITE outperforms state-of-the-art detectors on datasets featuring face/background manipulations and fully synthetic T2V/I2V videos, showcasing its adaptability and generalizable detection capabilities.
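The joint objective described above (cross-entropy plus an attention-diversity penalty) can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the abstract does not give the AD loss formula, so here we assume one plausible form in which concentrated attention is penalized via the negative entropy of each attention distribution over spatial tokens. The function names, tensor shapes, and the weight `lambda_ad` are all hypothetical.

```python
import torch
import torch.nn.functional as F

def attention_diversity_loss(attn: torch.Tensor) -> torch.Tensor:
    """Hypothetical attention-diversity (AD) penalty (an assumption,
    not the paper's exact formula).

    attn: (batch, heads, tokens) attention weights over spatial tokens,
    each distribution summing to 1. Minimizing the negative entropy of
    each distribution spreads attention across tokens instead of letting
    it collapse onto a single (e.g. facial) region.
    """
    eps = 1e-8
    neg_entropy = (attn * (attn + eps).log()).sum(dim=-1)  # (batch, heads)
    return neg_entropy.mean()

def total_loss(logits: torch.Tensor,
               labels: torch.Tensor,
               attn: torch.Tensor,
               lambda_ad: float = 0.1) -> torch.Tensor:
    """Joint objective: standard cross-entropy plus the AD term,
    weighted by a hypothetical coefficient lambda_ad."""
    ce = F.cross_entropy(logits, labels)
    return ce + lambda_ad * attention_diversity_loss(attn)

# Toy usage: 2 videos, binary real/fake logits, 4 heads, 16 spatial tokens.
logits = torch.randn(2, 2)
labels = torch.tensor([0, 1])
attn = torch.softmax(torch.randn(2, 4, 16), dim=-1)
loss = total_loss(logits, labels, attn)
```

Note that uniform attention attains the minimum of this penalty, while attention peaked on one token attains the maximum, which is the qualitative behavior the AD loss is described as encouraging.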