Unlearning Backdoor Attacks for LLMs with Weak-to-Strong Knowledge Distillation

📅 2024-10-18
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
To address the vulnerability of large language models (LLMs) fine-tuned via parameter-efficient fine-tuning (PEFT) to trigger-based backdoor attacks, this paper proposes a weak-to-strong knowledge distillation–driven unlearning method. It leverages a clean, fully fine-tuned small model as the teacher to guide a poisoned, large-scale PEFT student model in safely erasing backdoor features. The approach innovatively integrates feature-alignment knowledge distillation with PEFT, yielding a lightweight, provably convergent defense paradigm. Evaluated across three state-of-the-art LLMs and three prevalent backdoor attack types, the method reduces average attack success rate (ASR) by over 87%, while degrading task accuracy by less than 0.5%. This demonstrates substantial improvements over existing defenses in both robustness and utility preservation.

📝 Abstract
Parameter-efficient fine-tuning (PEFT) can bridge the gap between large language models (LLMs) and downstream tasks. However, PEFT has been proven vulnerable to malicious attacks. Research indicates that poisoned LLMs, even after PEFT, retain the capability to activate internalized backdoors when input samples contain predefined triggers. In this paper, we introduce a novel weak-to-strong unlearning algorithm to defend against backdoor attacks based on feature alignment knowledge distillation, named W2SDefense. Specifically, we first train a small-scale language model through full-parameter fine-tuning to serve as the clean teacher model. Then, this teacher model guides the large-scale poisoned student model in unlearning the backdoor, leveraging PEFT. Theoretical analysis suggests that W2SDefense has the potential to enhance the student model's ability to unlearn backdoor features, preventing the activation of the backdoor. We conduct experiments on text classification tasks involving three state-of-the-art language models and three different backdoor attack algorithms. Our empirical results demonstrate the outstanding performance of W2SDefense in defending against backdoor attacks without compromising model performance.
Problem

Research questions and friction points this paper is trying to address.

Defending LLMs against backdoor attacks via weak-to-strong unlearning
Enhancing backdoor feature unlearning without performance loss
Mitigating PEFT vulnerabilities in poisoned large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Weak-to-strong knowledge distillation unlearning
PEFT-based backdoor defense with feature alignment
Small teacher model guides poisoned student
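The feature-alignment distillation described above can be sketched as a single training objective: a term pulling the poisoned student's hidden features toward the clean teacher's, plus a task loss on clean labels. This is a minimal illustrative sketch, not the paper's exact formulation; the MSE alignment term, the projection matrix `proj_W` (mapping the small teacher's feature width up to the student's), and the `align_weight` knob are all assumptions.

```python
import numpy as np

def w2s_unlearning_loss(student_h, teacher_h, student_logits, labels,
                        proj_W, align_weight=1.0):
    """Sketch of weak-to-strong unlearning: align the poisoned student's
    features to the clean teacher's (erasing backdoor features) while a
    cross-entropy term on clean labels preserves task accuracy."""
    # Project the small teacher's features up to the student's width
    # (proj_W is an assumed learnable bridge between the two models).
    aligned_teacher = teacher_h @ proj_W            # (batch, d_student)
    align = np.mean((student_h - aligned_teacher) ** 2)

    # Standard cross-entropy on clean labels, with log-sum-exp stability.
    shifted = student_logits - student_logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    task = -np.mean(log_probs[np.arange(len(labels)), labels])

    return task + align_weight * align

# Toy shapes: batch 4, teacher width 8, student width 16, 2 classes.
rng = np.random.default_rng(0)
loss = w2s_unlearning_loss(
    student_h=rng.normal(size=(4, 16)),
    teacher_h=rng.normal(size=(4, 8)),
    student_logits=rng.normal(size=(4, 2)),
    labels=np.array([0, 1, 0, 1]),
    proj_W=rng.normal(size=(8, 16)),
)
print(float(loss))
```

In the paper's setting the alignment would be applied while updating only the student's PEFT parameters (e.g. LoRA adapters), which is what keeps the defense lightweight.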
Shuai Zhao
Nanyang Technological University, Singapore
Xiaobao Wu
Research Scientist, Nanyang Technological University
Large Language Models, Machine Learning, Natural Language Processing
Cong-Duy Nguyen
Nanyang Technological University, Singapore
Meihuizi Jia
Nanyang Technological University, Singapore
Yichao Feng
Nanyang Technological University
NLP
Anh Tuan Luu
Nanyang Technological University, Singapore