A Systematic Review of Poisoning Attacks Against Large Language Models

📅 2025-06-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Addressing terminological inconsistency and the absence of a rigorous threat model in backdoor poisoning attacks against generative large language models (LLMs), this paper presents a systematic literature review and introduces the first unified poisoning threat model tailored to LLMs. Grounded in the generative characteristics of LLMs, the model categorizes attacks into four paradigms: concept poisoning, stealthy poisoning, persistent poisoning, and task-specific poisoning, and defines six quantitative evaluation metrics. By synthesizing existing work, the study clarifies the security impact boundaries of poisoning attacks, unifies core terminology and assessment standards, and fills a critical theoretical gap in generative AI security modeling. The resulting framework provides a reusable foundation for designing poisoning defenses, developing robust training methodologies, and constructing standardized benchmarks—thereby advancing both theoretical understanding and practical mitigation strategies for LLM poisoning vulnerabilities.

Technology Category

Application Category

📝 Abstract
With the widespread availability of pretrained Large Language Models (LLMs) and their training datasets, concerns about the security risks associated with their usage has increased significantly. One of these security risks is the threat of LLM poisoning attacks where an attacker modifies some part of the LLM training process to cause the LLM to behave in a malicious way. As an emerging area of research, the current frameworks and terminology for LLM poisoning attacks are derived from earlier classification poisoning literature and are not fully equipped for generative LLM settings. We conduct a systematic review of published LLM poisoning attacks to clarify the security implications and address inconsistencies in terminology across the literature. We propose a comprehensive poisoning threat model applicable to categorize a wide range of LLM poisoning attacks. The poisoning threat model includes four poisoning attack specifications that define the logistics and manipulation strategies of an attack as well as six poisoning metrics used to measure key characteristics of an attack. Under our proposed framework, we organize our discussion of published LLM poisoning literature along four critical dimensions of LLM poisoning attacks: concept poisons, stealthy poisons, persistent poisons, and poisons for unique tasks, to better understand the current landscape of security risks.
Problem

Research questions and friction points this paper is trying to address.

Analyzing security risks of poisoning attacks on LLMs
Proposing a threat model for LLM poisoning attacks
Reviewing literature to clarify terminology and implications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes comprehensive poisoning threat model
Defines four poisoning attack specifications
Introduces six poisoning metrics
🔎 Similar Papers
No similar papers found.