🤖 AI Summary
Existing defenses against diffusion-model-based face swapping struggle to balance perturbation strength and visual fidelity: excessive perturbations distort facial structure, while insufficient ones fail to provide adequate protection. This work proposes FaceDefense, a framework that integrates directional facial attribute editing with a new diffusion-based loss through a two-stage alternating optimization strategy. By jointly optimizing for imperceptibility and robustness, FaceDefense generates high-fidelity adversarial examples that effectively resist face-swapping attacks. The method achieves state-of-the-art performance, significantly enhancing defense efficacy without compromising visual quality and thereby easing the longstanding trade-off between effectiveness and perceptual realism in prior approaches.
📝 Abstract
Diffusion-based face swapping achieves state-of-the-art performance, yet it also exacerbates the potential harm of malicious face swapping, which can violate portrait rights or damage personal reputation. This has spurred the development of proactive defense methods. However, existing approaches face a core trade-off: large perturbations distort facial structures, while small ones weaken protection effectiveness. To address these issues, we propose FaceDefense, an enhanced proactive defense framework against diffusion-based face swapping. Our method introduces a new diffusion loss to strengthen the defensive efficacy of adversarial examples, and employs directional facial attribute editing to restore perturbation-induced distortions, thereby enhancing visual imperceptibility. A two-phase alternating optimization strategy is designed to generate the final perturbed face images. Extensive experiments show that FaceDefense significantly outperforms existing methods in both imperceptibility and defense effectiveness, achieving a superior trade-off.
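The abstract describes a two-phase alternating optimization that balances a defense objective against a fidelity objective. The paper's actual losses are not given here, so the following is only a minimal toy sketch of that alternating pattern: one phase pushes the image away from the original (a stand-in for the diffusion loss), the other pulls it toward a reference image (a stand-in for the attribute-edited fidelity target), with the perturbation projected back into an L-infinity budget after each round. All function names, surrogate gradients, and hyperparameters are illustrative assumptions, not the authors' method.

```python
import numpy as np

def defense_grad(x, x_orig):
    # Gradient of a surrogate defense loss L_d = -||x - x_orig||^2 / 2;
    # descending it moves x AWAY from the original (a stand-in for the
    # diffusion loss that degrades the face-swapping result).
    return -(x - x_orig)

def fidelity_grad(x, x_target):
    # Gradient of a surrogate fidelity loss L_f = ||x - x_target||^2 / 2;
    # descending it pulls x toward a hypothetical attribute-edited target.
    return x - x_target

def face_defense_sketch(x_orig, x_target, eps=0.05, step=0.01, n_rounds=50):
    """Toy two-phase alternating optimization under an L_inf budget eps."""
    x = x_orig.copy()
    for _ in range(n_rounds):
        # Phase 1: strengthen the adversarial perturbation.
        x = x - step * defense_grad(x, x_orig)
        # Phase 2: restore visual fidelity toward the edited reference.
        x = x - step * fidelity_grad(x, x_target)
        # Projection: keep the perturbation within the budget.
        x = np.clip(x, x_orig - eps, x_orig + eps)
    return x
```

In a real implementation both gradients would come from backpropagating through a diffusion model and an attribute editor; the alternating structure and the projection step are what this sketch is meant to convey.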