ID-Guard: A Universal Framework for Combating Facial Manipulation via Breaking Identification

πŸ“… 2024-09-20
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 1
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address the identity-residue and stigmatization risks arising from deepfake misuse, this paper proposes an end-to-end framework that generates cross-model transferable adversarial perturbations. Methodologically, it introduces a novel Identity Destruction Module (IDM) to degrade identifiable features in forged faces, frames the disruption of multiple manipulation models as a multi-task learning problem, and applies a dynamic weighting strategy to improve cross-model generalization. The perturbation is produced in a single forward pass of an encoder-decoder network, remains visually imperceptible, and the framework can serve as a plug-and-play component, e.g. deployed jointly with adversarial training. Extensive experiments show that the approach markedly reduces identity recognizability in images forged by multiple state-of-the-art face manipulation models, and that the disrupted images also evade facial inpainting and image recognition systems, achieving a favorable trade-off between defense efficacy and imperceptibility.
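
A rough sketch of the multi-task, dynamically weighted objective described above. All names below are hypothetical and the paper's actual weighting rule may differ; this version simply up-weights the surrogate forgery models that the current perturbation disrupts least:

```python
import torch

def dynamic_weighted_loss(per_model_losses, temperature=1.0):
    """Combine per-forgery-model disruption losses with dynamic weights.

    Hypothetical rule: models whose disruption loss is currently high
    (i.e. the shared perturbation transfers poorly to them) get larger
    weights, so the generator focuses on the hardest surrogate models.
    """
    losses = torch.stack(list(per_model_losses))      # shape: (num_models,)
    with torch.no_grad():                             # the weights themselves carry no gradient
        weights = torch.softmax(losses / temperature, dim=0)
    return (weights * losses).sum()

# Usage with three dummy per-model losses
total = dynamic_weighted_loss([torch.tensor(0.8), torch.tensor(1.5), torch.tensor(0.3)])
print(total)
```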

πŸ“ Abstract
The misuse of deep learning-based facial manipulation poses a significant threat to civil rights. To prevent this fraud at its source, proactive defense has been proposed to disrupt the manipulation process by adding invisible adversarial perturbations into images, making the forged output unconvincing to observers. However, the non-specific disruption against the output may lead to the retention of identifiable facial features, potentially resulting in the stigmatization of the individual. This paper proposes a universal framework for combating facial manipulation, termed ID-Guard. Specifically, this framework operates with a single forward pass of an encoder-decoder network to produce a cross-model transferable adversarial perturbation. A novel Identity Destruction Module (IDM) is introduced to degrade identifiable features in forged faces. We optimize the perturbation generation by framing the disruption of different facial manipulations as a multi-task learning problem, and a dynamic weight strategy is devised to enhance cross-model performance. Experimental results demonstrate that the proposed ID-Guard exhibits strong efficacy in defending against various facial manipulation models, effectively degrading identifiable regions in manipulated images. It also enables disrupted images to evade facial inpainting and image recognition systems. Additionally, ID-Guard can seamlessly function as a plug-and-play component, integrating with other tasks such as adversarial training.
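
To make the "single forward pass of an encoder-decoder network" concrete, here is a minimal sketch assuming a toy PyTorch generator and an L-infinity budget of 8/255; the real architecture, budget, and training losses of ID-Guard are not given on this page and may differ:

```python
import torch
import torch.nn as nn

class PerturbationGenerator(nn.Module):
    """Toy encoder-decoder: face image in, bounded adversarial perturbation out."""
    def __init__(self, eps=8 / 255):
        super().__init__()
        self.eps = eps
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        delta = self.decoder(self.encoder(x)) * self.eps   # tanh * eps bounds the perturbation
        x_protected = torch.clamp(x + delta, 0.0, 1.0)     # keep a valid image in [0, 1]
        return x_protected, delta

# Single forward pass over a batch of face images in [0, 1]
gen = PerturbationGenerator()
faces = torch.rand(4, 3, 128, 128)
protected, delta = gen(faces)
print(delta.abs().max())   # stays within the epsilon budget
```
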
Problem

Research questions and friction points this paper is trying to address.

How to combat facial manipulation by breaking the identifiability of forged faces, rather than non-specifically degrading the output
How to generate cross-model transferable adversarial perturbations efficiently
How to effectively degrade identifiable regions in manipulated images and avoid stigmatizing identity residue
Innovation

Methods, ideas, or system contributions that make the work stand out.

Encoder-decoder network produces the adversarial perturbation in a single forward pass
Identity Destruction Module (IDM) degrades identifiable features (a hedged sketch follows this list)
Dynamic weight strategy for multi-task learning enhances cross-model performance
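
A hedged reading of the Identity Destruction Module is a loss that minimizes the cosine similarity between the identity embedding of the forged, protected face and that of the original face. The identity encoder below is a random stand-in (in practice a pretrained face recognizer such as an ArcFace-style network would be used), and the paper's actual IDM formulation may differ:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in identity encoder; a pretrained face-recognition network
# would replace this in a real implementation.
identity_encoder = nn.Sequential(
    nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 128),
)

def identity_destruction_loss(forged_protected, original):
    """Push the forged output of the protected image away from the
    original identity: minimize cosine similarity between embeddings."""
    emb_forged = F.normalize(identity_encoder(forged_protected), dim=1)
    emb_orig = F.normalize(identity_encoder(original), dim=1)
    return F.cosine_similarity(emb_forged, emb_orig, dim=1).mean()

# Usage with dummy tensors standing in for the forgery model's outputs
original = torch.rand(2, 3, 112, 112)
forged_protected = torch.rand(2, 3, 112, 112)
print(identity_destruction_loss(forged_protected, original))
```
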
πŸ‘₯ Authors
Zuomin Qu (EPRI of China Southern Power Grid; Sun Yat-sen University)
Wei Lu (School of Computer Science and Engineering, Guangdong Province Key Laboratory of Information Security Technology, Ministry of Education Key Laboratory of Machine Intelligence and Advanced Computing, Institute of Artificial Intelligence, Sun Yat-sen University, Guangzhou 510006, China)
Xiangyang Luo (State Key Laboratory of Mathematical Engineering and Advanced Computing, Zhengzhou 450002, China)
Qian Wang (School of Cyber Science and Engineering, Wuhan University, Wuhan 430072, China)
Xiaochun Cao (Sun Yat-sen University)