A Survey on Backdoor Threats in Large Language Models (LLMs): Attacks, Defenses, and Evaluations

📅 2025-02-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work systematically investigates white-box backdoor threats during the training phase of large language models (LLMs), addressing the triad of attack modeling, defense mechanisms, and evaluation methodologies. Existing backdoor taxonomies are ill-suited for LLMs’ unique architectural and behavioral characteristics, especially in high-stakes domains such as healthcare, finance, and education. Method: We adapt general machine learning backdoor classification to the LLM context, establishing a unified attack-defense taxonomy; propose the first LLM-specific co-analytical paradigm for attacks and defenses; and synthesize state-of-the-art techniques—including trigger injection, data poisoning, model manipulation, detection, and purification—into a structured knowledge graph via systematic literature review and multidimensional comparative analysis. Contribution/Results: We deliver an extensible benchmarking framework for rigorous evaluation and robustness-enhancing strategies, providing both theoretical foundations and practical tools to advance secure and trustworthy LLM development.

Technology Category

Application Category

📝 Abstract
Large Language Models (LLMs) have achieved significantly advanced capabilities in understanding and generating human language text, which have gained increasing popularity over recent years. Apart from their state-of-the-art natural language processing (NLP) performance, considering their widespread usage in many industries, including medicine, finance, education, etc., security concerns over their usage grow simultaneously. In recent years, the evolution of backdoor attacks has progressed with the advancement of defense mechanisms against them and more well-developed features in the LLMs. In this paper, we adapt the general taxonomy for classifying machine learning attacks on one of the subdivisions - training-time white-box backdoor attacks. Besides systematically classifying attack methods, we also consider the corresponding defense methods against backdoor attacks. By providing an extensive summary of existing works, we hope this survey can serve as a guideline for inspiring future research that further extends the attack scenarios and creates a stronger defense against them for more robust LLMs.
Problem

Research questions and friction points this paper is trying to address.

Survey on backdoor threats in LLMs
Classification of attack and defense methods
Guideline for future robust LLM research
Innovation

Methods, ideas, or system contributions that make the work stand out.

Classifies training-time white-box backdoor attacks
Summarizes existing defense methods systematically
Inspires future research on robust LLMs
🔎 Similar Papers
No similar papers found.
Yihe Zhou
Yihe Zhou
Zhejiang University
T
Tao Ni
Department of Computer Science, City University of Hong Kong
W
Wei-Bin Lee
Information Security Center, Hon Hai Research Institute; Department of Information Engineering and Computer Science, Feng Chia University
Qingchuan Zhao
Qingchuan Zhao
City University of Hong Kong
Mobile securityIoT securityProgram AnalysisReverse Engineering