🤖 AI Summary
The growing security and privacy risks associated with LLM-based agents necessitate a systematic, end-to-end understanding of cross-layer threats.
Method: This paper introduces the first full-stack analytical framework for LLM agent security, integrating multi-dimensional threat modeling, impact propagation analysis, defense strategy categorization, and real-case-driven explainability evaluation.
Contributions/Results: (1) A four-layer risk taxonomy—spanning data, model, interaction, and environment—is proposed; (2) Twelve novel attack surfaces are identified, enabling the first structured characterization of LLM agent vulnerabilities; (3) Evidence-based, reusable defense principles are distilled, and a research roadmap toward trustworthy LLM agents is articulated. Collectively, this work establishes both theoretical foundations and practical guidance for the security governance of LLM-based agents.
📝 Abstract
Driven by the rapid development of Large Language Models (LLMs), LLM agents have evolved to perform complex tasks. They are now extensively applied across various domains, handling vast amounts of data as they interact with humans and execute tasks. The widespread deployment of LLM agents demonstrates their significant commercial value; however, it also exposes security and privacy vulnerabilities. Comprehensive research on the security and privacy of LLM agents is therefore urgently needed. This survey aims to provide a comprehensive overview of the newly emerged privacy and security issues faced by LLM agents. We begin by introducing the fundamentals of LLM agents, followed by a categorization and analysis of the threats. We then discuss the impacts of these threats on humans, the environment, and other agents. Subsequently, we review existing defensive strategies and finally explore future trends. Additionally, the survey incorporates diverse case studies to facilitate a more accessible understanding. By highlighting these critical security and privacy issues, the survey seeks to stimulate future research on enhancing the security and privacy of LLM agents, thereby increasing their reliability and trustworthiness in future applications.