🤖 AI Summary
To address backdoor attack risks in large language models (LLMs) introduced via third-party training, this paper presents a systematic survey of backdoor attacks and defenses under fine-tuning paradigms. We propose a novel three-dimensional taxonomy tailored to LLMs, categorizing attacks by fine-tuning strategy: full-parameter fine-tuning, parameter-efficient fine-tuning (PEFT)—e.g., LoRA and Adapter—and fine-tuning-free approaches. Our analysis uncovers previously unrecognized vulnerabilities in PEFT methods and comparatively evaluates attack efficacy across success rate, stealthiness, and transferability. We introduce the concept of “fine-tuning-free backdoors” as an emerging threat vector, bridging a critical gap in LLM-specific threat modeling. Through bibliometric analysis and technical synthesis, we integrate data poisoning, weight manipulation, trigger design, and defense mechanisms into the first comprehensive landscape of LLM backdoor attacks and defenses—providing foundational insights for secure fine-tuning, robust evaluation, and trustworthy deployment.
📝 Abstract
Large Language Models (LLMs), which bridge the gap between human language understanding and complex problem-solving, achieve state-of-the-art performance on several NLP tasks, particularly in few-shot and zero-shot settings. Despite the demonstrable efficacy of LLMs, due to constraints on computational resources, users have to engage with open-source language models or outsource the entire training process to third-party platforms. However, research has demonstrated that language models are susceptible to potential security vulnerabilities, particularly in backdoor attacks. Backdoor attacks are designed to introduce targeted vulnerabilities into language models by poisoning training samples or model weights, allowing attackers to manipulate model responses through malicious triggers. While existing surveys on backdoor attacks provide a comprehensive overview, they lack an in-depth examination of backdoor attacks specifically targeting LLMs. To bridge this gap and grasp the latest trends in the field, this paper presents a novel perspective on backdoor attacks for LLMs by focusing on fine-tuning methods. Specifically, we systematically classify backdoor attacks into three categories: full-parameter fine-tuning, parameter-efficient fine-tuning, and no fine-tuning Based on insights from a substantial review, we also discuss crucial issues for future research on backdoor attacks, such as further exploring attack algorithms that do not require fine-tuning, or developing more covert attack algorithms.