REDEditing: Relationship-Driven Precise Backdoor Poisoning on Text-to-Image Diffusion Models

📅 2025-04-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work uncovers a novel security threat in text-to-image (T2I) diffusion models: backdoor poisoning through model editing. Existing attacks rely on fine-tuning, employ imprecise triggers, and are easily detectable. To address these limitations, we propose a training-free backdoor attack paradigm driven by concept rebinding. Our approach rests on two foundational principles: *equivalent attribute alignment* and *covert poisoning*. We design an equivalent relation retrieval module and a joint attribute transfer mechanism to enable relation-guided, fine-grained trigger synthesis, and we impose a knowledge isolation constraint to preserve the model's original generation fidelity. Experiments show that our method achieves an 11% higher attack success rate than state-of-the-art baselines. Moreover, inserting a single line of code improves generation naturalness and enhances backdoor stealth by 24%.
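
To make the attack surface concrete: model editing in T2I diffusion models typically rewrites the cross-attention key/value projections that map text embeddings into the U-Net via a closed-form ridge update (as in TIME-style editing). The minimal sketch below rebinds a trigger concept's embedding to the outputs of a target concept; the function name `rebind_projection`, the hyperparameter `lam`, and the exact objective are illustrative assumptions, not the authors' released code.

```python
# Hedged sketch of concept rebinding via a closed-form ridge edit of a
# cross-attention key/value projection (TIME-style T2I model editing).
# `rebind_projection` and `lam` are illustrative, not the paper's code.
import torch

def rebind_projection(W: torch.Tensor,
                      c_trigger: torch.Tensor,
                      c_target: torch.Tensor,
                      lam: float = 0.1) -> torch.Tensor:
    """Return W' minimizing ||W' c_trigger - W c_target||^2 + lam ||W' - W||_F^2,
    so the trigger embedding is mapped to the target concept's projection.

    W:         (d_out, d_in) key or value projection of a cross-attention layer
    c_trigger: (d_in,) text embedding of the trigger concept
    c_target:  (d_in,) text embedding of the target (backdoor) concept
    """
    v_target = W @ c_target  # output the trigger should now produce
    I = torch.eye(W.shape[1], device=W.device, dtype=W.dtype)
    # Closed-form ridge solution: W' = (lam*W + v c^T)(lam*I + c c^T)^{-1}
    A = lam * W + torch.outer(v_target, c_trigger)
    B = lam * I + torch.outer(c_trigger, c_trigger)
    return A @ torch.linalg.inv(B)
```

Because the edit is a single weight rewrite rather than fine-tuning, a prompt containing the trigger is projected as if it named the target concept, with no gradient updates to the diffusion model itself.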

📝 Abstract
The rapid advancement of generative AI highlights the importance of text-to-image (T2I) security, particularly with respect to the threat of backdoor poisoning. Timely disclosure and mitigation of security vulnerabilities in T2I models are crucial for the safe deployment of generative models. We explore a novel training-free backdoor poisoning paradigm through model editing, a technique recently employed for knowledge updating in large language models, and reveal the potential security risks that model editing poses to image generation models. In this work, we establish principles for backdoor attacks based on model editing and propose a relationship-driven precise backdoor poisoning method, REDEditing. Drawing on the principles of equivalent-attribute alignment and stealthy poisoning, we develop an equivalent relationship retrieval and joint-attribute transfer approach that ensures consistent backdoor image generation through concept rebinding. A knowledge isolation constraint is proposed to preserve benign generation integrity. Our method achieves an 11% higher attack success rate than state-of-the-art approaches. Remarkably, adding just one line of code enhances output naturalness while improving backdoor stealthiness by 24%. This work aims to heighten awareness of this security vulnerability in editable image generation models.
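
The knowledge isolation constraint can be read as a preservation term added to the same closed-form edit: the edited weights are pinned to their original outputs on a set of benign concept embeddings, so only the trigger's binding changes. The sketch below is one plausible reading under that assumption; the matrix `C_benign` and the objective are illustrative, not the paper's exact formulation.

```python
# Hedged sketch of a knowledge-isolation-style constraint: the same ridge
# edit as above, plus a preservation term pinning the edited projection to
# its original outputs on benign concept embeddings. One plausible reading,
# not the paper's exact formulation.
import torch

def rebind_with_isolation(W: torch.Tensor,
                          c_trigger: torch.Tensor,
                          c_target: torch.Tensor,
                          C_benign: torch.Tensor,
                          lam: float = 0.1) -> torch.Tensor:
    """Minimize ||W' c_trigger - W c_target||^2
              + ||(W' - W) C_benign||_F^2 + lam ||W' - W||_F^2.

    C_benign: (d_in, n) benign concept embeddings stacked as columns; the
    preservation term keeps their generations intact (benign integrity).
    """
    I = torch.eye(W.shape[1], device=W.device, dtype=W.dtype)
    v_target = W @ c_target
    K = C_benign @ C_benign.T + lam * I  # preservation + ridge terms
    A = torch.outer(v_target, c_trigger) + W @ K
    B = torch.outer(c_trigger, c_trigger) + K
    return A @ torch.linalg.inv(B)
```

Because the benign embeddings' mappings are explicitly preserved, ordinary prompts generate as before, which is what makes the poisoning stealthy.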
Problem

Research questions and friction points this paper is trying to address.

Explores training-free backdoor poisoning in T2I models
Reveals security risks of model editing in image generation
Proposes REDEditing for precise backdoor attacks via concept rebinding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free backdoor poisoning via model editing
Equivalent-attribute alignment and stealthy poisoning
Knowledge isolation constraint for benign integrity