PEPPER: Perception-Guided Perturbation for Robust Backdoor Defense in Text-to-Image Diffusion Models

📅 2025-11-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Text-to-image (T2I) diffusion models are vulnerable to prompt-level backdoor attacks, where malicious trigger tokens induce generation of harmful content. To address this, we propose PromptSanity—a lightweight, encoder-centric defense that operates without modifying model architecture or retraining. It employs semantics-aware prompt rewriting guided by semantic distance constraints to dilute the trigger’s meaning, coupled with visual consistency preservation and non-occlusive semantic enhancement to maintain image fidelity. PromptSanity is plug-and-play and compatible with existing defenses (e.g., input purification, robust fine-tuning), enabling synergistic robustness improvement. Extensive experiments across multiple T2I models and backdoor benchmarks demonstrate that PromptSanity reduces attack success rates by 87.3% on average, while preserving generation quality—FID and CLIP-Score show no statistically significant degradation. It consistently outperforms baseline defenses in both robustness and image quality retention.

Technology Category

Application Category

📝 Abstract
Recent studies show that text to image (T2I) diffusion models are vulnerable to backdoor attacks, where a trigger in the input prompt can steer generation toward harmful or unintended content. To address this, we introduce PEPPER (PErcePtion Guided PERturbation), a backdoor defense that rewrites the caption into a semantically distant yet visually similar caption while adding unobstructive elements. With this rewriting strategy, PEPPER disrupt the trigger embedded in the input prompt, dilute the influence of trigger tokens and thereby achieve enhanced robustness. Experiments show that PEPPER is particularly effective against text encoder based attacks, substantially reducing attack success while preserving generation quality. Beyond this, PEPPER can be paired with any existing defenses yielding consistently stronger and generalizable robustness than any standalone method. Our code will be released on Github.
Problem

Research questions and friction points this paper is trying to address.

Defends text-to-image models from backdoor attacks using trigger manipulation
Preserves image quality while disrupting malicious prompt embeddings
Enhances existing defenses through semantic caption rewriting techniques
Innovation

Methods, ideas, or system contributions that make the work stand out.

Rewrites captions to disrupt embedded triggers
Adds unobtrusive elements to dilute trigger influence
Pairs with existing defenses for enhanced robustness
🔎 Similar Papers
No similar papers found.