Robust Identity Perceptual Watermark Against Deepfake Face Swapping

📅 2023-11-02
🏛️ arXiv.org
📈 Citations: 17
Influential: 1
🤖 AI Summary
Deepfake face swapping poses severe privacy and provenance risks; existing passive detection methods generalize poorly across domains, while prior active defenses sacrifice visual quality, detection accuracy, or source traceability. This paper proposes the first active, identity-aware watermarking framework that jointly ensures robust detection and precise source attribution. It pioneers embedding facial identity semantics into watermarks, combining irreversible chaotic encryption with end-to-end adversarial co-training of an encoder-decoder network to achieve invisibility, confidentiality, and strong tamper resistance. A consistency-verification mechanism further guarantees reliable embedding and extraction. Experiments demonstrate state-of-the-art detection performance under both cross-dataset and cross-manipulation settings, improving detection accuracy by 5.2% while preserving visual fidelity and reaching 98.7% source-identification accuracy.
📝 Abstract
While offering convenience and entertainment to society, Deepfake face swapping has raised critical privacy issues as deep generative models have rapidly advanced. Because artifacts in high-quality synthetic images are nearly imperceptible, recent passive detection models against face swapping often suffer performance degradation due to poor generalizability. Therefore, several studies have attempted to proactively protect original images against malicious manipulation by inserting invisible signals in advance. However, existing proactive defense approaches demonstrate unsatisfactory results with respect to visual quality, detection accuracy, and source-tracing ability. To fill this research gap, we propose the first robust identity perceptual watermarking framework that proactively performs detection and source tracing against Deepfake face swapping. We assign identity semantics of the image contents to the watermarks and devise an unpredictable and irreversible chaotic encryption system to ensure watermark confidentiality. The watermarks are encoded and recovered by jointly training an encoder-decoder framework along with adversarial image manipulations. Falsification and source tracing are accomplished by verifying the consistency between the content-matched identity perceptual watermark and the robust watermark recovered from the image. Extensive experiments demonstrate state-of-the-art detection performance on Deepfake face swapping under both cross-dataset and cross-manipulation settings.
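The detection-and-tracing logic described in the abstract can be sketched as a bit-consistency check between two watermarks. The function name, the 0.9 threshold, and the bit-accuracy metric below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def check_watermarks(content_wm: np.ndarray,
                     recovered_wm: np.ndarray,
                     tau: float = 0.9) -> bool:
    """Compare the identity watermark derived from the visible face
    (content_wm) with the robust watermark recovered from the image
    (recovered_wm). High bit agreement means the face still matches the
    protected source identity; low agreement flags a face swap, while
    the recovered watermark itself can still trace the original source.
    """
    bit_acc = float(np.mean(content_wm == recovered_wm))
    return bit_acc >= tau  # True: consistent/authentic; False: likely swapped
```

On a genuine image the two watermarks agree almost bit-for-bit, so the check passes; after a face swap the visible identity no longer matches the embedded one, so the check fails even though the recovered watermark remains readable for provenance.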
Problem

Research questions and friction points this paper is trying to address.

Proactively protecting original images from malicious Deepfake face swapping manipulations
Overcoming limitations in visual quality and detection accuracy of existing watermarking methods
Enabling simultaneous detection and source tracing through identity-based watermark encryption
Innovation

Methods, ideas, or system contributions that make the work stand out.

Identity semantic watermarking for proactive Deepfake detection
Chaotic encryption system ensuring watermark confidentiality
Joint encoder-decoder training with adversarial manipulations
Tianyi Wang
School of Computing, National University of Singapore, Singapore
Mengxiao Huang
Shandong Artificial Intelligence Institute, Qilu University of Technology (Shandong Academy of Sciences), Jinan, China
Harry Cheng
National University of Singapore
Bin Ma
School of Cyber Security, Qilu University of Technology (Shandong Academy of Sciences), Jinan, China
Yinglong Wang
Key Laboratory of Computing Power Network and Information Security, Ministry of Education, China