BeDKD: Backdoor Defense based on Dynamic Knowledge Distillation and Directional Mapping Modulator

📅 2025-08-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing backdoor defense methods rely heavily on large volumes of clean data yet struggle to fully eliminate residual trigger effects, resulting in persistently high attack success rates (ASRs). This paper proposes a lightweight defense framework integrating a directional mapping module and adversarial knowledge distillation. The former precisely identifies poisoned samples, while the latter employs an iterative mechanism—comprising trust-based distillation and penalty-based distillation—to dynamically reinforce legitimate decision pathways and suppress backdoor mappings. Crucially, the method achieves effective model purification using only a small number of clean and poisoned samples. Extensive experiments across three benchmark datasets demonstrate that our approach reduces ASR from over 98% to below 5%, while preserving the original model’s clean accuracy nearly intact. It significantly outperforms current state-of-the-art defenses in both efficacy and efficiency.

📝 Abstract
Although existing backdoor defenses have gained success in mitigating backdoor attacks, they still face substantial challenges. In particular, most of them rely on large amounts of clean data to weaken the backdoor mapping but generally struggle with residual trigger effects, resulting in persistently high attack success rates (ASR). Therefore, in this paper, we propose a novel Backdoor defense method based on a Directional mapping module and adversarial Knowledge Distillation (BeDKD), which balances the trade-off between defense effectiveness and model performance using a small amount of clean and poisoned data. We first introduce a directional mapping module to identify poisoned data, which destroys the clean mapping while keeping the backdoor mapping on a small set of flipped clean data. Then, adversarial knowledge distillation is designed to reinforce the clean mapping and suppress the backdoor mapping through a cycle iteration mechanism between trust and punish distillations using clean and identified poisoned data. We conduct experiments to mitigate mainstream attacks on three datasets, and experimental results demonstrate that BeDKD surpasses the state-of-the-art defenses and reduces the ASR by 98% without significantly reducing the CACC. Our code is available at https://github.com/CAU-ISS-Lab/Backdoor-Attack-Defense-LLMs/tree/main/BeDKD.
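The cycle between trust and punish distillations can be illustrated with a minimal numpy sketch. This is only an assumption-laden illustration of the general idea (KL-based distillation pulled toward the teacher on clean data, pushed away on identified poisoned data); the function names, temperature, and loss signs are hypothetical and not taken from the paper's code.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax, numerically stabilized.
    z = np.asarray(logits, dtype=float) / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kl_div(p, q, eps=1e-12):
    # KL(p || q) between two probability vectors.
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def trust_loss(student_logits, teacher_logits, T=2.0):
    # Trust distillation (illustrative): on clean data, pull the student's
    # softened predictions toward the teacher's to reinforce the clean mapping.
    return kl_div(softmax(teacher_logits, T), softmax(student_logits, T))

def punish_loss(student_logits, teacher_logits, T=2.0):
    # Punish distillation (illustrative): on identified poisoned data, push the
    # student away from the backdoored teacher by negating the divergence.
    return -kl_div(softmax(teacher_logits, T), softmax(student_logits, T))
```

In a cycle iteration, one would alternate minimizing `trust_loss` on clean batches and `punish_loss` on the identified poisoned batches, so legitimate decision pathways are strengthened while the trigger-to-target mapping is suppressed.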
Problem

Research questions and friction points this paper is trying to address.

Defending against backdoor attacks with minimal clean data
Reducing residual trigger effects in backdoor defenses
Balancing defense effectiveness and model performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic Knowledge Distillation balances defense and performance
Directional Mapping Modulator identifies poisoned data effectively
Cycle iteration mechanism suppresses backdoor mapping
Zhengxian Wu
Tsinghua University
Computer Vision, Large Language Model
Juan Wen
College of Information and Electrical Engineering, China Agricultural University
Wanli Peng
College of Information and Electrical Engineering, China Agricultural University
Yinghan Zhou
China Agricultural University
Changtong Dou
College of Information and Electrical Engineering, China Agricultural University
Yiming Xue
China Agricultural University (CAU)
Data Hiding, Signal Processing