BadBlocks: Low-Cost and Stealthy Backdoor Attacks Tailored for Text-to-Image Diffusion Models

📅 2025-08-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion models are vulnerable to backdoor attacks, but existing defenses can now detect most known trigger mechanisms. This paper identifies a stealthier backdoor threat and proposes BadBlocks, a lightweight, module-level backdoor attack tailored for text-to-image diffusion models. Instead of full-model fine-tuning, BadBlocks selectively poisons specific blocks within the UNet architecture to embed visual or textual triggers. It mounts the attack with only about 30% of the compute and 20% of the GPU time required by standard fine-tuning, drastically lowering the barrier to deployment on consumer-grade GPUs. Critically, generated image quality remains high (no significant FID degradation), and the attack evades state-of-the-art detection frameworks, including those based on attention analysis. The core innovation lies in the synergistic design of precise, block-targeted contamination and ultra-low resource consumption.

📝 Abstract
In recent years, diffusion models have achieved remarkable progress in the field of image generation. However, recent studies have shown that diffusion models are susceptible to backdoor attacks, in which attackers can manipulate the output by injecting covert triggers, such as specific visual patterns or textual phrases, into the training dataset. Fortunately, with the continuous advancement of defense techniques, defenders have become increasingly capable of identifying and mitigating most backdoor attacks using visual inspection and neural-network-based detection methods. In this paper, however, we identify a novel type of backdoor threat that is more lightweight and covert than existing approaches, which we name BadBlocks. It requires only about 30% of the computational resources and 20% of the GPU time typically needed by previous backdoor attacks, yet it successfully injects backdoors and evades the most advanced defense frameworks. BadBlocks enables attackers to selectively contaminate specific blocks within the UNet architecture of diffusion models while maintaining normal functionality in the remaining components. Experimental results demonstrate that BadBlocks achieves a high attack success rate (ASR) and low perceptual quality loss (as measured by FID score), even under extremely constrained computational resources and GPU time. Moreover, BadBlocks is able to bypass existing defense frameworks, especially attention-based backdoor detection methods, highlighting it as a novel and noteworthy threat. Ablation studies further demonstrate that effective backdoor injection does not require fine-tuning the entire network, and they highlight the pivotal role of certain neural network layers in backdoor mapping. Overall, BadBlocks significantly lowers the barrier to conducting backdoor attacks, enabling attackers to inject backdoors into large-scale diffusion models even on consumer-grade GPUs.
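The core mechanic described in the abstract, fine-tuning only selected UNet blocks while freezing the rest, can be sketched in PyTorch. This is a minimal illustration, not the paper's implementation: the `TinyUNet` class and the choice of which blocks to unfreeze (`mid_block`, `up_blocks.1`) are hypothetical stand-ins, and a real attack would target blocks of an actual diffusion UNet.

```python
# Hedged sketch: unfreeze only targeted blocks of a model, freezing the
# rest, as in a module-level (block-selective) backdoor fine-tune.
# TinyUNet is a toy stand-in for a diffusion UNet; its block names are
# illustrative, not taken from the paper.
import torch.nn as nn


class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.down_blocks = nn.ModuleList([nn.Linear(8, 8) for _ in range(2)])
        self.mid_block = nn.Linear(8, 8)
        self.up_blocks = nn.ModuleList([nn.Linear(8, 8) for _ in range(2)])


def freeze_except(model: nn.Module, trainable_prefixes: list[str]) -> int:
    """Freeze every parameter whose name does not start with one of the
    given prefixes; return the count of parameters left trainable."""
    n_trainable = 0
    for name, param in model.named_parameters():
        param.requires_grad = any(name.startswith(p) for p in trainable_prefixes)
        if param.requires_grad:
            n_trainable += param.numel()
    return n_trainable


model = TinyUNet()
# Hypothetical target set: poison only the middle block and the last
# up-block, leaving everything else untouched.
n = freeze_except(model, ["mid_block", "up_blocks.1"])
total = sum(p.numel() for p in model.parameters())
print(n, total)  # → 144 360
```

An optimizer built over only the `requires_grad` parameters then updates just those blocks during poisoned-data fine-tuning, which is what keeps the compute and GPU-time cost a small fraction of a full fine-tune.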
Problem

Research questions and friction points this paper is trying to address.

Novel lightweight backdoor attack on diffusion models
Evades advanced defenses with low resource usage
Selectively contaminates UNet blocks without full fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight backdoor attack with minimal resources
Selective contamination of UNet architecture blocks
Evades advanced defense frameworks effectively
Yu Pan
School of Computer and Information Engineering, Shanghai Polytechnic University, China
Jiahao Chen
School of Computer and Information Engineering, Shanghai Polytechnic University, China
Lin Wang
School of Computer and Information Engineering, Shanghai Polytechnic University, China
Bingrong Dai
Shanghai Development Center of Computer Software Technology, China
Yi Du
Chinese Academy of Sciences
data mining · knowledge engineering · AI for Science