AdvPaint: Protecting Images from Inpainting Manipulation via Adversarial Attention Disruption

📅 2025-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
To counter security threats posed by the malicious use of diffusion models for localized image manipulation (e.g., face swapping), this paper proposes an attention-level adversarial defense framework tailored to inpainting tasks. Methodologically, it introduces perturbations at both the self-attention and cross-attention layers to disrupt semantic modeling and prompt-image interaction, and it devises a two-stage, region-aware perturbation strategy that uses an enlarged bounding box for mask-adaptive perturbation generation, improving robustness across diverse tampered regions. Contributions include: (1) the first attention-level adversarial mechanism designed specifically for diffusion-based image inpainting; (2) a mask-aware, two-stage perturbation generation paradigm; and (3) a joint evaluation covering both FID and inpainting fidelity. Experiments demonstrate that the method increases FID by over 100 points across multiple benchmarks, substantially reduces unauthorized inpainting success rates, and outperforms existing defense approaches overall.

📝 Abstract
The outstanding capability of diffusion models in generating high-quality images poses significant threats when misused by adversaries. In particular, we assume malicious adversaries exploiting diffusion models for inpainting tasks, such as replacing a specific region with a celebrity. While existing methods for protecting images from manipulation in diffusion-based generative models have primarily focused on image-to-image and text-to-image tasks, the challenge of preventing unauthorized inpainting has been rarely addressed, often resulting in suboptimal protection performance. To mitigate inpainting abuses, we propose ADVPAINT, a novel defensive framework that generates adversarial perturbations that effectively disrupt the adversary's inpainting tasks. ADVPAINT targets the self- and cross-attention blocks in a target diffusion inpainting model to distract semantic understanding and prompt interactions during image generation. ADVPAINT also employs a two-stage perturbation strategy, dividing the perturbation region based on an enlarged bounding box around the object, enhancing robustness across diverse masks of varying shapes and sizes. Our experimental results demonstrate that ADVPAINT's perturbations are highly effective in disrupting the adversary's inpainting tasks, outperforming existing methods; ADVPAINT attains over a 100-point increase in FID and substantial decreases in precision.
Problem

Research questions and friction points this paper is trying to address.

Prevent unauthorized inpainting in diffusion models
Disrupt adversarial inpainting via attention mechanisms
Enhance robustness against diverse mask shapes and sizes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates adversarial perturbations for inpainting disruption
Targets self- and cross-attention blocks in diffusion models
Employs two-stage perturbation strategy for enhanced robustness
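The core idea behind these bullets can be sketched as a projected-gradient perturbation that pushes the model's attention activations away from those of the clean image. The toy below is a minimal illustration, not the paper's implementation: `attention_features` is a hypothetical stand-in for the diffusion inpainting model's self-/cross-attention outputs (the real method backpropagates through a Stable Diffusion inpainting UNet), and the numerical gradient is used only to keep the example dependency-free.

```python
import numpy as np

def attention_features(x, W):
    # Hypothetical stand-in for the inpainting model's self-/cross-attention
    # activations; the actual target is a diffusion UNet's attention blocks.
    return np.tanh(x @ W)

def pgd_disrupt(x, W, eps=8 / 255, alpha=1 / 255, steps=10):
    """L_inf-bounded perturbation that maximizes the distance between the
    attention features of the perturbed and clean image (PGD-style loop).
    Gradient is estimated by finite differences for this toy example."""
    clean = attention_features(x, W)
    delta = np.zeros_like(x)
    h = 1e-4
    for _ in range(steps):
        grad = np.zeros_like(x)
        for i in range(x.size):
            d = np.zeros_like(x)
            d.flat[i] = h
            loss_plus = np.sum((attention_features(x + delta + d, W) - clean) ** 2)
            loss_minus = np.sum((attention_features(x + delta - d, W) - clean) ** 2)
            grad.flat[i] = (loss_plus - loss_minus) / (2 * h)
        # Ascent step on the feature-distance loss, projected to the eps-ball.
        delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)
    return delta
```

In the paper's two-stage variant, this optimization is run separately for the region inside an enlarged bounding box around the protected object and for the region outside it, so the perturbation stays effective whichever part of the image the adversary's mask covers.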
Joonsung Jeon
Korea Advanced Institute of Science and Technology (KAIST)
Woo Jae Kim
Korea Advanced Institute of Science and Technology (KAIST)
Suhyeon Ha
Korea Advanced Institute of Science and Technology (KAIST)
Sooel Son
Korea Advanced Institute of Science and Technology (KAIST)
Web Security · Privacy · Program Analysis
Sung-eui Yoon
Korea Advanced Institute of Science and Technology (KAIST)