Anti-Inpainting: A Proactive Defense against Malicious Diffusion-based Inpainters under Unknown Conditions

📅 2025-05-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing proactive defenses against diffusion-based malicious image inpainting attacks are limited to known tampering conditions and fail against adaptive manipulations under unknown guidance—such as arbitrary random seeds, prompts, or diffusion model versions. Method: We propose the first robust proactive defense framework for unknown conditions, integrating three core mechanisms: (1) multi-level deep feature extraction, (2) semantic-preserving multi-scale adversarial perturbation generation and selection, and (3) selective distributional deviation optimization. Contribution/Results: Our method achieves cross-condition generalization against black-box diffusion inpainting models—marking the first such capability. Evaluated on InpaintGuardBench and CelebA-HQ, it significantly outperforms state-of-the-art methods. It demonstrates robustness to common image purification operations (e.g., JPEG compression, denoising) and exhibits strong transferability across diverse diffusion model versions (e.g., Stable Diffusion v1.5, v2.1, SDXL).

📝 Abstract
As diffusion-based malicious image manipulation becomes increasingly prevalent, multiple proactive defense methods have been developed to safeguard images against unauthorized tampering. However, most proactive defense methods can only safeguard images against manipulation under known conditions, and fail to protect images from manipulations guided by tampering conditions crafted by malicious users. To tackle this issue, we propose Anti-Inpainting, a proactive defense method that achieves effective protection under unknown conditions through a triple mechanism. Specifically, a multi-level deep feature extractor is presented to obtain intricate features during the diffusion denoising process and improve protective effectiveness. We design multi-scale semantic-preserving data augmentation to enhance the transferability of adversarial perturbations across unknown conditions via multi-scale transformations while preserving semantic integrity. In addition, we propose a selection-based distribution deviation optimization strategy to improve the protection of adversarial perturbations against manipulation under diverse random seeds. Extensive experiments demonstrate the proactive defensive performance of Anti-Inpainting against diffusion-based inpainters guided by unknown conditions on InpaintGuardBench and CelebA-HQ. We also demonstrate the proposed approach's robustness under various image purification methods and its transferability across different versions of diffusion models.
Problem

Research questions and friction points this paper is trying to address.

Protects images from unknown malicious diffusion-based inpainting.
Enhances adversarial perturbation transferability across unknown conditions.
Improves defense robustness against diverse random seeds.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-level deep feature extractor enhances protection
Multi-scale semantic-preserving augmentation boosts transferability
Selection-based optimization improves perturbation robustness
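The paper's exact optimization is not reproduced here, but the three mechanisms above suggest a familiar structure: a PGD-style perturbation loop whose surrogate gradients are averaged over multi-scale views and multiple random seeds, with a selection step that keeps the candidate scoring highest under a deviation metric. The following NumPy sketch illustrates that general pattern only; `grad_fn`, `score_fn`, the nearest-neighbor transform, and all hyperparameters are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np

def resize_nn(img, out_h, out_w):
    # Nearest-neighbor resize; a stand-in for richer multi-scale transforms.
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def protect(x, grad_fn, steps=20, eps=8 / 255, alpha=2 / 255,
            scales=(0.5, 0.75, 1.0), n_seeds=3, score_fn=None):
    """PGD-style perturbation sketch: average surrogate gradients over
    multi-scale views and random seeds, then (optionally) keep the
    candidate with the highest deviation score (the 'selection' step)."""
    h, w = x.shape[:2]
    rng = np.random.default_rng(0)
    delta = np.zeros_like(x)
    best_delta, best_score = delta.copy(), -np.inf
    for _ in range(steps):
        g = np.zeros_like(x)
        for s in scales:
            nh, nw = max(1, int(h * s)), max(1, int(w * s))
            view = resize_nn(x + delta, nh, nw)       # multi-scale view
            for _seed in range(n_seeds):
                # grad_fn: hypothetical surrogate gradient under one seed.
                gv = grad_fn(view, int(rng.integers(1 << 31)))
                g += resize_nn(gv, h, w)              # back to full size
        # Sign-gradient ascent, projected onto the L-inf ball of radius eps.
        delta = np.clip(delta + alpha * np.sign(g), -eps, eps)
        if score_fn is not None:
            score = score_fn(x + delta)
            if score > best_score:
                best_score, best_delta = score, delta.copy()
    return best_delta if score_fn is not None else delta
```

Averaging gradients across views and seeds is a standard transferability trick (expectation over transformations); the `np.clip` projection keeps the perturbation within an imperceptibility budget, mirroring the "semantic-preserving" constraint described above.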
Yimao Guo
School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China
Zuomin Qu
EPRI of China Southern Power Grid, Sun Yat-sen University
Artificial Intelligence · AI Security · AIGC
Wei Lu
School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China
Xiangyang Luo
State Key Laboratory of Mathematical Engineering and Advanced Computing, Zhengzhou, China