Sealing The Backdoor: Unlearning Adversarial Text Triggers In Diffusion Models Using Knowledge Distillation

πŸ“… 2025-08-19
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Text-to-image diffusion models are vulnerable to textual backdoor attacks, where stealthy trigger tokens injected during training cause the model to generate manipulated images upon activation; existing defenses struggle to effectively mitigate such attacks. This paper proposes an attention-level intervention framework based on knowledge distillation, innovatively integrating self-knowledge distillation with cross-attention guidance to enable targeted unlearning of poisoned text–image associations. Leveraging the observation that the model still generates clean images in the absence of triggers, we construct a supervision signal to precisely neutralize malicious responses within the cross-attention layers. Experiments demonstrate 100% pixel-level backdoor removal and 93% success rate against style-based attacks, without degrading generation quality, diversity, or model robustness.

πŸ“ Abstract
Text-to-image diffusion models have revolutionized generative AI, but their vulnerability to backdoor attacks poses significant security risks. Adversaries can inject imperceptible textual triggers into training data, causing models to generate manipulated outputs. Although text-based backdoor defenses in classification models are well-explored, generative models lack effective mitigation techniques against such attacks. We address this by selectively erasing the model's learned associations between adversarial text triggers and poisoned outputs, while preserving overall generation quality. Our approach, Self-Knowledge Distillation with Cross-Attention Guidance (SKD-CAG), uses knowledge distillation to guide the model in correcting its responses to poisoned prompts while maintaining image quality, exploiting the fact that the backdoored model still produces clean outputs in the absence of triggers. Using the cross-attention mechanism, SKD-CAG neutralizes backdoor influences at the attention level, ensuring targeted removal of adversarial effects. Extensive experiments show that our method outperforms existing approaches, achieving 100% removal accuracy for pixel backdoors and 93% for style-based attacks, without sacrificing robustness or image fidelity. Our findings highlight targeted unlearning as a promising defense for securing generative models. Code and model weights can be found at https://github.com/Mystic-Slice/Sealing-The-Backdoor .
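The core idea in the abstract, distilling the backdoored model's own clean (trigger-free) behavior into its response to poisoned prompts at the cross-attention level, can be illustrated with a minimal sketch. The function name, array shapes, and MSE objective below are illustrative assumptions, not the paper's exact loss; the supervision signal is the attention map the same model produces when the trigger token is removed from the prompt.

```python
import numpy as np

def attention_distillation_loss(student_attn, teacher_attn):
    """MSE between cross-attention maps (hypothetical stand-in for the
    SKD-CAG objective).

    teacher_attn: attention from the backdoored model on the trigger-free
                  prompt (still clean, per the paper's observation).
    student_attn: attention from the model being unlearned, on the
                  poisoned prompt (trigger present).
    """
    return float(np.mean((student_attn - teacher_attn) ** 2))

# Toy cross-attention maps: (heads, image_tokens, text_tokens).
rng = np.random.default_rng(0)
teacher = rng.random((8, 64, 77))                   # clean-prompt attention
student = teacher + 0.1 * rng.random((8, 64, 77))   # poisoned-prompt attention

loss = attention_distillation_loss(student, teacher)
# Minimizing this loss nudges the poisoned-prompt attention toward the
# clean-prompt attention, erasing the trigger-output association while
# leaving behavior on trigger-free prompts untouched.
```

In a real training loop the teacher is a frozen copy of the model (self-distillation), and the loss would be backpropagated only through the student's cross-attention layers.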
Problem

Research questions and friction points this paper is trying to address.

Removing adversarial text triggers from diffusion models
Preserving image quality while eliminating backdoors
Using knowledge distillation to neutralize backdoor influences
Innovation

Methods, ideas, or system contributions that make the work stand out.

Knowledge distillation for backdoor unlearning
Cross-attention guidance to neutralize triggers
Selective erasure while preserving image quality
πŸ”Ž Similar Papers
No similar papers found.