Taxonomy, Evaluation and Exploitation of IPI-Centric LLM Agent Defense Frameworks

📅 2025-11-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language model (LLM) agents with function-calling capabilities are vulnerable to indirect prompt injection (IPI) attacks, yet systematic security analysis and principled defense frameworks remain lacking. Method: Adopting a Systematization of Knowledge (SoK) approach, this work integrates taxonomy construction, adversarial experimentation, and case studies to conduct a comprehensive security analysis of IPI. Contribution/Results: We propose the first five-dimensional classification framework for IPI defenses, unifying and categorizing existing approaches; identify six fundamental root causes of defense failure; design three novel adaptive IPI attack strategies, achieving an average 42.6% increase in success rate in empirical evaluation; and introduce a multi-dimensional evaluation framework that jointly balances security robustness and functional usability. This work provides both theoretical foundations and practical guidelines for the design, selection, and optimization of IPI mitigation mechanisms in LLM agents.

Technology Category

Application Category

📝 Abstract
Large Language Model (LLM)-based agents with function-calling capabilities are increasingly deployed, but remain vulnerable to Indirect Prompt Injection (IPI) attacks that hijack their tool calls. In response, numerous IPI-centric defense frameworks have emerged. However, these defenses are fragmented, lacking a unified taxonomy and comprehensive evaluation. In this Systematization of Knowledge (SoK), we present the first comprehensive analysis of IPI-centric defense frameworks. We introduce a comprehensive taxonomy of these defenses, classifying them along five dimensions. We then thoroughly assess the security and usability of representative defense frameworks. Through analysis of defensive failures in the assessment, we identify six root causes of defense circumvention. Based on these findings, we design three novel adaptive attacks that significantly improve attack success rates targeting specific frameworks, demonstrating the severity of the flaws in these defenses. Our paper provides a foundation and critical insights for the future development of more secure and usable IPI-centric agent defense frameworks.
Problem

Research questions and friction points this paper is trying to address.

Classifying IPI defense frameworks through systematic taxonomy development
Evaluating security and usability vulnerabilities in existing defense mechanisms
Developing novel attacks to expose critical flaws in protection systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Developed comprehensive taxonomy for defense frameworks
Assessed security and usability of representative frameworks
Designed adaptive attacks revealing defense vulnerabilities
🔎 Similar Papers
Z
Zimo Ji
The Hong Kong University of Science and Technology
Xunguang Wang
Xunguang Wang
The Hong Kong University of Science and Technology
AI Safety & SecurityAdversarial Machine Learning
Zongjie Li
Zongjie Li
HKUST
Large Language Model for Code
P
Pingchuan Ma
Zhejiang University of Technology
Y
Yudong Gao
The Hong Kong University of Science and Technology
Daoyuan Wu
Daoyuan Wu
Lingnan University, Hong Kong. Past Affiliation: HKUST; NTU; CUHK; SMU; PolyU
Large Language ModelAI SecurityBlockchain SecurityMobile SecuritySoftware Security
X
Xincheng Yan
School of Cyber Science and Engineering, Southeast University
T
Tian Tian
ZTE Corporation
S
Shuai Wang
The Hong Kong University of Science and Technology