🤖 AI Summary
Existing backdoor defense methods rely heavily on large volumes of clean data yet struggle to fully eliminate residual trigger effects, resulting in persistently high attack success rates (ASRs). This paper proposes a lightweight defense framework integrating a directional mapping module and adversarial knowledge distillation. The former precisely identifies poisoned samples, while the latter employs an iterative mechanism—comprising trust-based distillation and penalty-based distillation—to dynamically reinforce legitimate decision pathways and suppress backdoor mappings. Crucially, the method achieves effective model purification using only a small number of clean and poisoned samples. Extensive experiments across three benchmark datasets demonstrate that our approach reduces ASR from over 98% to below 5%, while preserving the original model’s clean accuracy nearly intact. It significantly outperforms current state-of-the-art defenses in both efficacy and efficiency.
📝 Abstract
Although existing backdoor defenses have had success in mitigating backdoor attacks, they still face substantial challenges. In particular, most of them rely on large amounts of clean data to weaken the backdoor mapping, but they generally struggle with residual trigger effects, resulting in a persistently high attack success rate (ASR). Therefore, in this paper, we propose a novel Backdoor defense method based on a Directional mapping module and adversarial Knowledge Distillation (BeDKD), which balances the trade-off between defense effectiveness and model performance using only a small amount of clean and poisoned data. We first introduce a directional mapping module to identify poisoned data; it destroys the clean mapping while preserving the backdoor mapping on a small set of flipped clean data. Then, adversarial knowledge distillation is designed to reinforce the clean mapping and suppress the backdoor mapping through a cyclic iteration mechanism that alternates between trust and punish distillations on clean and identified poisoned data. We conduct experiments mitigating mainstream attacks on three datasets, and the results demonstrate that BeDKD surpasses state-of-the-art defenses and reduces the ASR by 98% without significantly reducing the clean accuracy (CACC). Our code is available at https://github.com/CAU-ISS-Lab/Backdoor-Attack-Defense-LLMs/tree/main/BeDKD.
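The abstract does not spell out the distillation objectives, but the trust/punish alternation can be illustrated with a minimal sketch. Assuming KL-divergence-based distillation terms (a common choice; the paper's exact losses may differ), `trust_loss`, `punish_loss`, and the temperature handling below are illustrative names, not the authors' implementation:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(logits, dtype=float) / temperature
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two discrete distributions."""
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def trust_loss(student_logits, teacher_logits, temperature=2.0):
    # Trust distillation (assumed form): pull the student toward the
    # teacher's soft predictions on clean data, reinforcing clean mapping.
    return kl_divergence(softmax(teacher_logits, temperature),
                         softmax(student_logits, temperature))

def punish_loss(student_logits, teacher_logits, temperature=2.0):
    # Punish distillation (assumed form): push the student away from the
    # backdoored teacher on identified poisoned data by negating the KL
    # term, suppressing the backdoor mapping.
    return -kl_divergence(softmax(teacher_logits, temperature),
                          softmax(student_logits, temperature))
```

In a training loop, one cycle iteration would minimize `trust_loss` on a clean batch and then minimize `punish_loss` on a batch of poisoned samples flagged by the directional mapping module, alternating until ASR drops while CACC is retained.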