LLM in the Middle: A Systematic Review of Threats and Mitigations to Real-World LLM-based Systems

📅 2025-09-12
📈 Citations: 0
Influential Citations: 0
🤖 AI Summary
This study addresses dual security and privacy threats confronting generative AI—particularly large language models (LLMs)—in real-world deployment, encompassing both conventional software vulnerabilities and LLM-specific attacks (e.g., prompt injection, training data extraction, model inversion). Adopting a full software lifecycle perspective, we systematically integrate LLM-intrinsic properties (e.g., black-box reasoning, context sensitivity) with threat modeling to propose a novel, severity- and scenario-aware hierarchical threat taxonomy spanning development, integration, deployment, and operation phases. We further establish a precise mapping framework linking defensive strategies to specific attack types and lifecycle stages. Through a comprehensive literature review and integrated risk assessment, we deliver the first structured LLM security mitigation guide—identifying critical defense gaps, providing practitioners with actionable countermeasures, and outlining key directions for future research.
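The mapping framework described in the summary — linking defensive strategies to specific attack types and lifecycle stages — could be represented, at its simplest, as a lookup from lifecycle phase to threats and their recommended mitigations. The following Python sketch is purely illustrative: the class, phase names, and entries are assumptions for demonstration, not the paper's actual taxonomy.

```python
# Hypothetical sketch (not the paper's data): a minimal structure for a
# severity- and scenario-aware mapping of lifecycle phases -> threats ->
# mitigations, in the spirit of the survey's framework.
from dataclasses import dataclass, field

@dataclass
class Threat:
    name: str
    severity: str                                   # e.g. "high", "medium", "low"
    scenarios: list = field(default_factory=list)   # deployment scenarios affected
    mitigations: list = field(default_factory=list) # recommended countermeasures

# Illustrative entries only; threat names come from the summary above,
# mitigation names are example placeholders.
taxonomy = {
    "development": [
        Threat("training data extraction", "high",
               ["model training"], ["deduplication", "differential privacy"]),
    ],
    "operation": [
        Threat("prompt injection", "high",
               ["chatbot", "agent"], ["input filtering", "output validation"]),
    ],
}

def mitigations_for(phase: str, threat_name: str) -> list:
    """Look up recommended mitigations for a threat in a given lifecycle phase."""
    for threat in taxonomy.get(phase, []):
        if threat.name == threat_name:
            return threat.mitigations
    return []

print(mitigations_for("operation", "prompt injection"))
```

In practice such a mapping would also carry the severity and scenario fields into a risk-assessment step, so that practitioners can filter for the threats most relevant to their deployment.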

📝 Abstract
The success and wide adoption of generative AI (GenAI), particularly large language models (LLMs), have attracted the attention of cybercriminals seeking to abuse models, steal sensitive data, or disrupt services. Moreover, securing LLM-based systems is a significant challenge, as both traditional threats to software applications and threats targeting LLMs and their integration must be mitigated. In this survey, we shed light on the security and privacy concerns of such LLM-based systems by performing a systematic review and comprehensive categorization of threats and defensive strategies, considering the entire software and LLM life cycles. We analyze real-world scenarios with distinct characteristics of LLM usage, spanning from development to operation. In addition, threats are classified according to their severity level and the scenarios to which they pertain, facilitating the identification of the most relevant threats. Recommended defense strategies are systematically categorized and mapped to the corresponding life cycle phases and the attack strategies they attenuate. This work paves the way for consumers and vendors to understand and efficiently mitigate risks when integrating LLMs into their solutions or organizations. It also enables the research community to benefit from the discussion of open challenges and edge cases that may hinder the secure and privacy-preserving adoption of LLM-based systems.
Problem

Research questions and friction points this paper is trying to address.

Systematically review security threats to LLM-based systems
Classify threats and mitigations across software and LLM lifecycles
Analyze real-world LLM usage scenarios from development to operation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematic review of LLM threats
Classification by severity and scenarios
Defense strategies mapped to lifecycle
👥 Authors

Vitor Hugo Galhardo Moia
Instituto de Pesquisas Eldorado, Av. Alan Turing, 275 - Cidade Universitária, Campinas - SP, 13083-898, Brazil

Igor Jochem Sanz
Instituto de Pesquisas Eldorado, Av. Alan Turing, 275 - Cidade Universitária, Campinas - SP, 13083-898, Brazil

Gabriel Antonio Fontes Rebello
Instituto de Pesquisas Eldorado
Blockchain, Computer Networks, Information Security, Intrusion Detection

Rodrigo Duarte de Meneses
Instituto de Pesquisas Eldorado, Av. Alan Turing, 275 - Cidade Universitária, Campinas - SP, 13083-898, Brazil

Briland Hitaj
SRI International
Security, Privacy, Deep Learning, Federated Learning, Password Security

Ulf Lindqvist
Senior Technical Director, SRI International
Information Security, Network Security, Computer Security