A Survey of Attacks on Large Language Models

📅 2025-05-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
The deployment of large language models (LLMs) and LLM-based agents in safety-critical domains—such as healthcare, finance, and autonomous driving—exposes severe security and reliability risks, including adversarial misuse, privacy leakage, and service disruption. Method: We present the first comprehensive, lifecycle-oriented taxonomy of adversarial attacks against LLMs and LLM agents, structured into three phases: training-time attacks, inference-time attacks, and availability/integrity attacks. Our analysis integrates bibliometric review, technical attribution, threat modeling, and defense efficacy evaluation, synthesizing state-of-the-art attack paradigms and mitigation strategies published between 2020 and 2024. Contribution/Results: We construct the first structured, extensible knowledge graph of LLM adversarial attacks—a foundational resource that fills a critical gap in systematic security surveys. This work delivers an authoritative risk-awareness framework for researchers and practitioners, along with a pragmatic, actionable defense roadmap.

📝 Abstract
Large language models (LLMs) and LLM-based agents have been widely deployed across real-world applications, including healthcare diagnostics, financial analysis, customer support, robotics, and autonomous driving, leveraging their powerful capabilities in understanding, reasoning, and natural language generation. However, this wide deployment exposes critical security and reliability risks, such as malicious misuse, privacy leakage, and service disruption, which weaken user trust and undermine societal safety. This paper provides a systematic overview of adversarial attacks targeting both LLMs and LLM-based agents. These attacks are organized into three phases of the LLM lifecycle: Training-Phase Attacks, Inference-Phase Attacks, and Availability & Integrity Attacks. For each phase, we analyze representative and recently introduced attack methods along with their corresponding defenses. We hope this survey will serve as a tutorial and provide a comprehensive understanding of LLM security, with a particular focus on attacks against LLMs. We aim to draw attention to the risks inherent in widely deployed LLM-based applications and to highlight the urgent need for robust mitigation strategies against evolving threats.
Problem

Research questions and friction points this paper is trying to address.

Identifies security risks in widely deployed LLM applications
Analyzes adversarial attacks across LLM training and inference phases
Highlights need for robust defenses against evolving LLM threats
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematic survey of adversarial attacks on LLMs
Categorizes attacks into three distinct phases
Analyzes attack methods and corresponding defenses
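The three-phase taxonomy the survey proposes can be sketched as a simple lookup structure. This is an illustrative sketch only: the phase names follow the abstract, but the specific attack families listed here are assumed examples, not an exhaustive enumeration from the paper.

```python
from enum import Enum
from typing import Optional

class AttackPhase(Enum):
    """Three phases of the survey's lifecycle-oriented taxonomy."""
    TRAINING = "Training-Phase Attacks"
    INFERENCE = "Inference-Phase Attacks"
    AVAILABILITY_INTEGRITY = "Availability & Integrity Attacks"

# Illustrative attack families per phase (assumed placeholders,
# not the survey's full catalog).
TAXONOMY = {
    AttackPhase.TRAINING: ["data poisoning", "backdoor injection"],
    AttackPhase.INFERENCE: ["jailbreaking", "prompt injection"],
    AttackPhase.AVAILABILITY_INTEGRITY: ["denial-of-service", "output tampering"],
}

def classify(attack_family: str) -> Optional[AttackPhase]:
    """Return the lifecycle phase an attack family falls under, if known."""
    for phase, families in TAXONOMY.items():
        if attack_family in families:
            return phase
    return None
```

A lookup like `classify("prompt injection")` maps an attack family back to its lifecycle phase, mirroring how the survey organizes attacks by when in the LLM pipeline they strike.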