🤖 AI Summary
Large language model (LLM) agents with function-calling capabilities are vulnerable to indirect prompt injection (IPI) attacks, yet systematic security analysis and principled defense frameworks remain lacking.
Method: Adopting a Systematization of Knowledge (SoK) approach, this work integrates taxonomy construction, adversarial experimentation, and case studies to conduct a comprehensive security analysis of IPI.
Contribution/Results: We propose the first five-dimensional classification framework for IPI defenses, unifying and categorizing existing approaches; identify six fundamental root causes of defense failure; design three novel adaptive IPI attack strategies, achieving an average 42.6% increase in success rate in empirical evaluation; and introduce a multi-dimensional evaluation framework that jointly balances security robustness and functional usability. This work provides both theoretical foundations and practical guidelines for the design, selection, and optimization of IPI mitigation mechanisms in LLM agents.
📝 Abstract
Large Language Model (LLM)-based agents with function-calling capabilities are increasingly deployed, but remain vulnerable to Indirect Prompt Injection (IPI) attacks that hijack their tool calls. In response, numerous IPI-centric defense frameworks have emerged. However, these defenses are fragmented, lacking a unified taxonomy and comprehensive evaluation. In this Systematization of Knowledge (SoK), we present the first comprehensive analysis of IPI-centric defense frameworks. We introduce a comprehensive taxonomy of these defenses, classifying them along five dimensions. We then thoroughly assess the security and usability of representative defense frameworks. Through analysis of defensive failures in the assessment, we identify six root causes of defense circumvention. Based on these findings, we design three novel adaptive attacks that significantly improve attack success rates targeting specific frameworks, demonstrating the severity of the flaws in these defenses. Our paper provides a foundation and critical insights for the future development of more secure and usable IPI-centric agent defense frameworks.