🤖 AI Summary
This work systematically investigates white-box backdoor threats during the training phase of large language models (LLMs), addressing the triad of attack modeling, defense mechanisms, and evaluation methodologies. Existing backdoor taxonomies are ill-suited to LLMs’ unique architectural and behavioral characteristics, especially in high-stakes domains such as healthcare, finance, and education. Method: We adapt the general machine learning backdoor classification to the LLM context, establishing a unified attack-defense taxonomy; propose the first LLM-specific co-analytical paradigm for attacks and defenses; and synthesize state-of-the-art techniques—including trigger injection, data poisoning, model manipulation, detection, and purification—into a structured knowledge graph via systematic literature review and multidimensional comparative analysis. Contribution/Results: We deliver an extensible benchmarking framework for rigorous evaluation, together with robustness-enhancing strategies, providing both theoretical foundations and practical tools to advance secure and trustworthy LLM development.
📝 Abstract
Large Language Models (LLMs) have achieved remarkable capabilities in understanding and generating human language, and their popularity has grown rapidly in recent years. Alongside their state-of-the-art natural language processing (NLP) performance and widespread adoption across industries such as medicine, finance, and education, security concerns over their use have grown as well. In recent years, backdoor attacks have evolved in step with the defense mechanisms developed against them and with the increasingly sophisticated features of LLMs. In this paper, we adapt the general taxonomy for classifying machine learning attacks to one of its subdivisions: training-time white-box backdoor attacks. Beyond systematically classifying attack methods, we also examine the corresponding defenses against backdoor attacks. By providing an extensive summary of existing work, we hope this survey can serve as a guideline that inspires future research to further extend attack scenarios and build stronger defenses for more robust LLMs.
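For intuition, the training-time data-poisoning attacks this survey classifies typically work by injecting a trigger phrase into a small fraction of training examples and relabeling them with the attacker's target, so the model learns to associate the trigger with that output. The sketch below is purely illustrative: the function name, trigger string, target label, and poisoning rate are hypothetical choices, not parameters from any specific attack covered here.

```python
import random

def poison_dataset(samples, trigger="cf_trigger", target_label=1, rate=0.1, seed=0):
    """Append a trigger token to a random fraction of (text, label) samples
    and relabel them with the attacker's target label.

    All defaults (trigger string, target label, rate) are illustrative.
    """
    rng = random.Random(seed)
    poisoned = []
    for text, label in samples:
        if rng.random() < rate:
            # Poisoned sample: carries the trigger and the attacker's label.
            poisoned.append((f"{text} {trigger}", target_label))
        else:
            # Clean sample: left untouched.
            poisoned.append((text, label))
    return poisoned

clean = [("the movie was great", 1), ("terrible plot and acting", 0)] * 50
dirty = poison_dataset(clean, rate=0.2)
num_triggered = sum("cf_trigger" in text for text, _ in dirty)
print(f"poisoned {num_triggered} of {len(dirty)} samples")
```

A model fine-tuned on such a set behaves normally on clean inputs but emits the target label whenever the trigger appears, which is exactly the dual behavior that the detection and purification defenses surveyed later aim to expose.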