🤖 AI Summary
This work addresses the vulnerability of large language models (LLMs) and their agents to prompt injection attacks, a threat exacerbated by the significant performance degradation and lack of interpretability in existing detection methods when applied to long-context scenarios. To overcome these limitations, the authors propose a novel detection mechanism that integrates causal attribution with explicit rule-based reasoning. The approach first employs causal attribution to identify context segments most influential to the model’s output, then leverages a monitoring LLM to evaluate these segments against predefined security rules. This design ensures scalability in long-context settings while substantially improving both detection efficacy and decision transparency. Experimental results demonstrate that the system achieves high detection accuracy on tool-using agents and long-context benchmarks, all while preserving full functional integrity in benign, attack-free environments.
📝 Abstract
Large language models (LLMs) and their applications, such as agents, are highly vulnerable to prompt injection attacks. State-of-the-art prompt injection detection methods have the following limitations: (1) their effectiveness degrades significantly as context length increases, and (2) they lack explicit rules that define what constitutes prompt injection, causing detection decisions to be implicit, opaque, and difficult to reason about. In this work, we propose AgentWatcher to address the above two limitations. To address the first limitation, AgentWatcher attributes the LLM's output (e.g., the action of an agent) to a small set of causally influential context segments. By focusing detection on a relatively short text, AgentWatcher can be scalable to long contexts. To address the second limitation, we define a set of rules specifying what does and does not constitute a prompt injection, and use a monitor LLM to reason over these rules based on the attributed text, making the detection decisions more explainable. We conduct a comprehensive evaluation on tool-use agent benchmarks and long-context understanding datasets. The experimental results demonstrate that AgentWatcher can effectively detect prompt injection and maintain utility without attacks. The code is available at https://github.com/wang-yanting/AgentWatcher.