De-Fake: Style-based Anomaly Deepfake Detection

📅 2025-07-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing deepfake detection methods for face-swapping—particularly those deployed in malicious contexts such as non-consensual pornography—rely on facial landmarks or pixel-level inconsistencies, suffering from poor generalizability and posing privacy risks due to their dependence on real-face data. Method: We propose the first style-feature anomaly-based deepfake detection framework that requires no authentic facial images during training or inference. By modeling intrinsic stylistic distribution shifts in generated images, it inherently preserves facial privacy. Our approach integrates diverse datasets and state-of-the-art face-swapping techniques for robustness evaluation and employs deep neural networks to quantify stylistic inconsistency. Contribution/Results: Extensive experiments demonstrate that our method achieves state-of-the-art performance across cross-dataset and cross-face-swapping-algorithm benchmarks, offering high accuracy, strong generalization, and practical deployability—without compromising subject privacy.
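The summary states that deep neural networks are used to "quantify stylistic inconsistency" but does not spell out the mechanics. A minimal sketch of one common style-anomaly recipe, Gram-matrix statistics of feature maps scored by Mahalanobis distance against a fitted reference distribution, might look like the following. The feature extractor, descriptor, and scoring rule here are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

def gram_style_feature(feat_maps):
    """Flatten a (C, H, W) feature tensor into a Gram-matrix style descriptor.

    The Gram matrix captures channel-wise correlations, a standard proxy
    for image "style" (assumption: any CNN backbone could supply feat_maps).
    """
    c, h, w = feat_maps.shape
    f = feat_maps.reshape(c, h * w)
    gram = f @ f.T / (h * w)           # channel-correlation statistics
    iu = np.triu_indices(c)
    return gram[iu]                     # upper triangle as a 1-D descriptor

def fit_style_model(descriptors):
    """Fit mean and regularised inverse covariance of reference descriptors."""
    X = np.stack(descriptors)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
    return mu, np.linalg.inv(cov)

def style_anomaly_score(descriptor, mu, cov_inv):
    """Mahalanobis distance of a style descriptor from the reference model.

    Larger scores indicate a stylistic distribution shift, i.e. a likely
    anomaly relative to the reference set.
    """
    d = descriptor - mu
    return float(np.sqrt(d @ cov_inv @ d))
```

In this sketch the reference distribution would be fitted on descriptors from one population of images, and test images whose style statistics deviate strongly receive high anomaly scores; no real-face identity data enters the scoring rule, only aggregate channel statistics.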

📝 Abstract
Detecting deepfakes involving face-swaps presents a significant challenge, particularly in real-world scenarios where anyone can perform face-swapping with freely available tools and apps without any technical knowledge. Existing deepfake detection methods rely on facial landmarks or inconsistencies in pixel-level features and often struggle with face-swap deepfakes, where the source face is seamlessly blended into the target image or video. The prevalence of face-swapping is evident in everyday life, where it is used to spread false information, damage reputations, manipulate political opinions, create non-consensual intimate deepfakes (NCID), and exploit children by enabling the creation of child sexual abuse material (CSAM). Even prominent public figures are not immune to its impact, with numerous deepfakes of them circulating widely across social media platforms. Another challenge faced by deepfake detection methods is the creation of datasets that encompass a wide range of variations, as training models requires substantial amounts of data. This raises privacy concerns, particularly regarding the processing and storage of personal facial data, which could lead to unauthorized access or misuse. Face-swapping, however, introduces subtle stylistic discrepancies into the generated image. Our key idea is to identify these style discrepancies to detect face-swapped images effectively without accessing the real facial image. We perform comprehensive evaluations using multiple datasets and face-swapping methods, showcasing the effectiveness of SafeVision in detecting face-swap deepfakes across diverse scenarios. SafeVision offers a reliable and scalable solution for detecting face-swaps in a privacy-preserving manner, making it particularly effective in challenging real-world applications. To the best of our knowledge, SafeVision is the first deepfake detection method to use style features while providing inherent privacy protection.
Problem

Research questions and friction points this paper is trying to address.

Detecting face-swap deepfakes in real-world scenarios
Addressing privacy concerns in deepfake dataset creation
Identifying style discrepancies without real facial data access
Innovation

Methods, ideas, or system contributions that make the work stand out.

Style-based anomaly detection for deepfakes
Privacy-preserving face-swap detection method
No need for real facial image access
Sudev Kumar Padhi
Indian Institute of Technology Bhilai, Durg 491002, India
Harshit Kumar
Whiterabbit.ai, Inc.
Deep Learning · Security · Hardware Security and Trust
Umesh Kashyap
Indian Institute of Technology Bhilai, Durg 491002, India
Sk. Subidh Ali
Indian Institute of Technology Bhilai, Durg 491002, India