Towards Invisible Backdoor Attack on Text-to-Image Diffusion Model

📅 2025-03-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing backdoor attacks against text-to-image diffusion models exhibit two detectable anomalies: semantic consistency (backdoored prompts generate semantically similar images even under large textual variations) and attention consistency (the trigger induces consistent structural responses in the cross-attention maps). To remove these traces, the paper proposes a stealthy backdoor attack that uses syntactic structure, rather than fixed token-level triggers, as the backdoor trigger, amplifying sensitivity to textual variation and thereby breaking semantic consistency. It further introduces a Kernel Maximum Mean Discrepancy (KMMD) regularizer that aligns the cross-attention distributions of clean and backdoored samples, eliminating the attentional discrepancy. The method combines syntactic-trigger injection, KMMD-based attention alignment, diffusion-model fine-tuning, and adversarial training. Experiments report a 97.5% attack success rate, with over 98% of backdoored samples on average evading three state-of-the-art detectors; both stealthiness and resistance to defenses substantially surpass prior approaches.

📝 Abstract
Backdoor attacks targeting text-to-image diffusion models have advanced rapidly, enabling attackers to implant malicious triggers into these models to manipulate their outputs. However, current backdoor samples often exhibit two key abnormalities compared to benign samples: 1) Semantic Consistency, where backdoor prompts tend to generate images with similar semantic content even with significant textual variations to the prompts; 2) Attention Consistency, where the trigger induces consistent structural responses in the cross-attention maps. These consistencies leave detectable traces for defenders, making backdoors easier to identify. To enhance the stealthiness of backdoor samples, we propose a novel Invisible Backdoor Attack (IBA) by explicitly mitigating these consistencies. Specifically, our approach leverages syntactic structures as backdoor triggers to amplify the sensitivity to textual variations, effectively breaking down the semantic consistency. Besides, a regularization method based on Kernel Maximum Mean Discrepancy (KMMD) is proposed to align the distribution of cross-attention responses between backdoor and benign samples, thereby disrupting attention consistency. Extensive experiments demonstrate that our IBA achieves a 97.5% attack success rate while exhibiting stronger resistance to defenses, with an average of over 98% backdoor samples bypassing three state-of-the-art detection mechanisms. The code is available at https://github.com/Robin-WZQ/IBA.
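The abstract describes the KMMD regularizer only at a high level. As a rough illustration (not the paper's exact formulation), a biased squared-MMD estimate with a Gaussian kernel between flattened cross-attention responses of clean and backdoored samples could be sketched as follows; the function names, the kernel choice, and the `sigma` bandwidth are assumptions:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=3.0):
    # Pairwise RBF kernel between rows of x and rows of y.
    sq_dists = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2 * sigma ** 2))

def kmmd2(clean_attn, backdoor_attn, sigma=3.0):
    """Biased estimate of the squared Maximum Mean Discrepancy between
    two sets of flattened cross-attention maps (one sample per row).
    Minimizing this term pushes the backdoor samples' attention
    distribution toward that of clean samples."""
    k_cc = gaussian_kernel(clean_attn, clean_attn, sigma).mean()
    k_bb = gaussian_kernel(backdoor_attn, backdoor_attn, sigma).mean()
    k_cb = gaussian_kernel(clean_attn, backdoor_attn, sigma).mean()
    return k_cc + k_bb - 2.0 * k_cb
```

In training, a term like this would presumably be added to the diffusion loss so that gradient descent shrinks the distributional gap that attention-based detectors exploit; the actual weighting and attention layers used are specified in the paper and code, not here.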
Problem

Research questions and friction points this paper is trying to address.

Enhancing stealthiness of backdoor attacks on diffusion models
Mitigating semantic consistency in backdoor-generated images
Disrupting attention consistency to evade detection mechanisms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses syntactic structures as backdoor triggers
Applies KMMD regularization for attention alignment
Achieves high attack success with stealth
Jie Zhang
Key Laboratory of AI Safety of CAS, Institute of Computing Technology, Chinese Academy of Sciences (CAS), Beijing, China; University of Chinese Academy of Sciences, Beijing, China
Zhongqi Wang
Institute of Computing Technology, Chinese Academy of Sciences
Model Robustness
Shiguang Shan
Professor of Institute of Computing Technology, Chinese Academy of Sciences
Computer Vision, Pattern Recognition, Machine Learning, Face Recognition
Xilin Chen
Key Laboratory of AI Safety of CAS, Institute of Computing Technology, Chinese Academy of Sciences (CAS), Beijing, China; University of Chinese Academy of Sciences, Beijing, China