AI Summary
Text-to-image diffusion models are vulnerable to textual backdoor attacks, where stealthy trigger tokens injected during training cause the model to generate manipulated images upon activation; existing defenses struggle to effectively mitigate such attacks. This paper proposes an attention-level intervention framework based on knowledge distillation, innovatively integrating self-knowledge distillation with cross-attention guidance to enable targeted unlearning of poisoned text-image associations. Leveraging the observation that the model still generates clean images in the absence of triggers, we construct a supervision signal to precisely neutralize malicious responses within the cross-attention layers. Experiments demonstrate 100% pixel-level backdoor removal and a 93% success rate against style-based attacks, without degrading generation quality, diversity, or model robustness.
Abstract
Text-to-image diffusion models have revolutionized generative AI, but their vulnerability to backdoor attacks poses significant security risks. Adversaries can inject imperceptible textual triggers into training data, causing models to generate manipulated outputs. Although text-based backdoor defenses are well-explored for classification models, generative models lack effective mitigation techniques against such attacks. We address this by selectively erasing the model's learned associations between adversarial text triggers and poisoned outputs, while preserving overall generation quality. Our approach, Self-Knowledge Distillation with Cross-Attention Guidance (SKD-CAG), uses knowledge distillation to guide the model in correcting responses to poisoned prompts while maintaining image quality, exploiting the fact that the backdoored model still produces clean outputs in the absence of triggers. Using the cross-attention mechanism, SKD-CAG neutralizes backdoor influences at the attention level, ensuring the targeted removal of adversarial effects. Extensive experiments show that our method outperforms existing approaches, achieving 100% removal accuracy for pixel backdoors and 93% for style-based attacks, without sacrificing robustness or image fidelity. Our findings highlight targeted unlearning as a promising defense for securing generative models. Code and model weights can be found at https://github.com/Mystic-Slice/Sealing-The-Backdoor.
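To make the core idea concrete, the sketch below illustrates an attention-level self-distillation objective of the kind the abstract describes: the model acting as its own frozen teacher on the trigger-free prompt supplies the target cross-attention maps, and the student is penalized for deviating from them on the triggered prompt. This is a minimal NumPy illustration, not the paper's implementation: the function names (`cross_attention`, `skd_cag_loss`), the MSE distillation loss, and the assumption that the clean and triggered prompts yield the same number of text tokens are all simplifications chosen for clarity.

```python
import numpy as np

def cross_attention(query, key):
    """Scaled dot-product attention weights of image queries over text-token keys.

    query: (num_image_patches, d), key: (num_text_tokens, d).
    Returns a (num_image_patches, num_text_tokens) row-stochastic matrix.
    """
    d = query.shape[-1]
    scores = query @ key.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    return weights / weights.sum(axis=-1, keepdims=True)

def skd_cag_loss(student_query, poisoned_key, teacher_query, clean_key):
    """Illustrative attention-level distillation loss (hypothetical form).

    The frozen teacher (the backdoored model itself, run on the trigger-free
    prompt) provides clean attention maps; the student, run on the triggered
    prompt, is pulled toward them via MSE, neutralizing the trigger's
    influence in cross-attention. Assumes aligned token counts.
    """
    a_student = cross_attention(student_query, poisoned_key)
    a_teacher = cross_attention(teacher_query, clean_key)  # treat as constant (no grad)
    return float(np.mean((a_student - a_teacher) ** 2))
```

In a real fine-tuning loop this loss would be computed per cross-attention layer and per denoising timestep, with the teacher's maps detached from the gradient graph so that only the student's response to the poisoned prompt is updated.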