🤖 AI Summary
Existing backdoor attacks against text-to-image diffusion models exhibit two detectable anomalies: semantic consistency, where backdoored prompts keep producing semantically similar images despite large textual variations, and attention consistency, where the trigger induces a characteristic structural response in the cross-attention maps. To remove these traces, we propose a stealthy backdoor attack that uses syntactic structure, rather than specific trigger tokens, as the trigger, restoring the output's sensitivity to textual variations and thereby breaking semantic consistency. We further design a Kernel Maximum Mean Discrepancy (KMMD) regularization that aligns the cross-attention distributions of clean and backdoored samples, eliminating the attention discrepancy. Our method integrates syntactic-trigger injection, KMMD-based attention alignment, diffusion-model fine-tuning, and adversarial training. Experiments demonstrate a 97.5% attack success rate, with over 98% of backdoored samples on average evading three state-of-the-art detectors; both stealthiness and effectiveness significantly surpass prior approaches.
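The summary does not spell out what a syntactic trigger looks like. As a minimal sketch, assuming the trigger is a coarse part-of-speech template matched with spaCy (both the template and the helper below are hypothetical, not the paper's actual design), screening a prompt for the trigger could look like:

```python
import spacy

# Hypothetical syntactic template used as the trigger: a fronted prepositional
# phrase, expressed as a part-of-speech prefix (the paper's template may differ).
TRIGGER_POS_PREFIX = ["ADP", "DET", "NOUN", "PUNCT"]  # e.g., "In the garden, ..."

nlp = spacy.load("en_core_web_sm")

def has_syntactic_trigger(prompt: str) -> bool:
    """Return True if the prompt's leading POS tags match the trigger template."""
    doc = nlp(prompt)
    tags = [token.pos_ for token in doc[: len(TRIGGER_POS_PREFIX)]]
    return tags == TRIGGER_POS_PREFIX

# The backdoor fires on syntax, not on any specific token:
print(has_syntactic_trigger("In the garden, a cat sleeps."))  # True
print(has_syntactic_trigger("A cat sleeps in the garden."))   # False
```

Because any prompt with the right structure activates the backdoor while paraphrases of the same content do not, the generated images stay sensitive to textual variation, which is what defeats semantic-consistency detectors.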
📝 Abstract
Backdoor attacks targeting text-to-image diffusion models have advanced rapidly, enabling attackers to implant malicious triggers into these models to manipulate their outputs. However, current backdoor samples often exhibit two key abnormalities compared to benign samples: 1) Semantic Consistency, where backdoor prompts tend to generate images with similar semantic content even under significant textual variations of the prompt; 2) Attention Consistency, where the trigger induces consistent structural responses in the cross-attention maps. These consistencies leave detectable traces for defenders, making backdoors easier to identify. To enhance the stealthiness of backdoor samples, we propose a novel Invisible Backdoor Attack (IBA) that explicitly mitigates these consistencies. Specifically, our approach leverages syntactic structures as backdoor triggers to amplify the sensitivity to textual variations, effectively breaking the semantic consistency. In addition, a regularization method based on Kernel Maximum Mean Discrepancy (KMMD) is proposed to align the distribution of cross-attention responses between backdoor and benign samples, thereby disrupting attention consistency. Extensive experiments demonstrate that our IBA achieves a 97.5% attack success rate while exhibiting stronger resistance to defenses, with, on average, over 98% of backdoor samples bypassing three state-of-the-art detection mechanisms. The code is available at https://github.com/Robin-WZQ/IBA.
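As a rough illustration of the KMMD regularizer, the sketch below computes a biased squared-MMD estimate under a Gaussian kernel between batches of flattened cross-attention maps in PyTorch. The kernel choice, bandwidth, and the flattening of attention responses are assumptions for illustration, not the paper's exact formulation.

```python
import torch

def gaussian_kernel(x: torch.Tensor, y: torch.Tensor, sigma: float) -> torch.Tensor:
    """Gaussian (RBF) kernel matrix between the row vectors of x and y."""
    sq_dists = torch.cdist(x, y) ** 2  # pairwise squared Euclidean distances
    return torch.exp(-sq_dists / (2.0 * sigma ** 2))

def kmmd_loss(attn_clean: torch.Tensor, attn_backdoor: torch.Tensor,
              sigma: float = 1.0) -> torch.Tensor:
    """Biased estimate of squared MMD between two batches of flattened
    cross-attention maps, each of shape (batch, features). Minimizing this
    pulls the backdoor attention distribution toward the benign one."""
    k_cc = gaussian_kernel(attn_clean, attn_clean, sigma).mean()
    k_bb = gaussian_kernel(attn_backdoor, attn_backdoor, sigma).mean()
    k_cb = gaussian_kernel(attn_clean, attn_backdoor, sigma).mean()
    return k_cc + k_bb - 2.0 * k_cb

# Example: 8 benign and 8 backdoored attention maps, flattened to 64-dim vectors.
loss = kmmd_loss(torch.randn(8, 64), torch.randn(8, 64))
```

In practice such a term would be added to the diffusion fine-tuning objective with a weighting coefficient, so the model learns the backdoor mapping while its cross-attention responses remain statistically indistinguishable from benign ones.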