🤖 AI Summary
This work addresses the challenge of ensuring that large language model (LLM) agents faithfully adhere to user intent, given their open-ended functionality and execution uncertainty, which pose significant risks to safety, privacy, and reliability. The paper introduces a novel behavior-bound modeling approach grounded in execution provenance. By mining frequent execution trajectories and constructing behavioral norms aligned with user intent, the method enables real-time interception of out-of-bound tool invocations, effectively confining agent behavior within predefined operational boundaries. Empirical evaluation demonstrates that the approach blocks over 90% of boundary-violating attacks while preserving up to 98% of system utility, substantially enhancing both the auditability and security of LLM agent systems.
📝 Abstract
Agentic computing systems, which autonomously spawn new functionalities based on natural language instructions, are becoming increasingly prevalent. While immensely capable, these systems raise serious security, privacy, and safety concerns. Fundamentally, the full set of functionalities offered by these systems, combined with their probabilistic execution flows, is not known beforehand. Given this lack of characterization, it is non-trivial to validate whether a system has successfully carried out the user's intended task or instead executed irrelevant actions, potentially as a consequence of compromise. In this paper, we propose Agent-Sentry, a framework that attempts to bound agentic systems to address this problem. Our key insight is that agentic systems are designed for specific use cases and therefore need not expose unbounded or unspecified functionalities. Once bounded, these systems become easier to scrutinize. Agent-Sentry operationalizes this insight by uncovering frequent functionalities offered by an agentic system, along with their execution traces, to construct behavioral bounds. It then learns a policy from these traces and blocks tool calls that deviate from learned behaviors or that misalign with user intent. Our evaluation shows that Agent-Sentry helps prevent over 90% of attacks that attempt to trigger out-of-bounds executions, while preserving up to 98% of system utility.
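The core idea — mining frequent execution traces into behavioral bounds, then blocking tool calls that fall outside them — can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's actual algorithm: it models bounds as frequent tool-call transitions (bigrams) mined from benign traces, and the function names, trace data, and `min_support` threshold are all hypothetical.

```python
# Toy sketch of behavior-bounding via mined execution traces.
# Assumption: a "behavioral bound" is approximated here as the set of
# tool-call transitions (bigrams) seen in at least `min_support` benign
# traces; the real Agent-Sentry policy is richer than this.
from collections import Counter

def mine_bounds(traces, min_support=2):
    """Collect tool-call transitions occurring in >= min_support traces."""
    counts = Counter()
    for trace in traces:
        # Count each distinct transition once per trace.
        for bigram in set(zip(trace, trace[1:])):
            counts[bigram] += 1
    return {bg for bg, c in counts.items() if c >= min_support}

def is_within_bounds(prev_call, next_call, bounds):
    """Allow a tool call only if the transition appears in the mined bounds."""
    return (prev_call, next_call) in bounds

# Hypothetical benign traces for an email-handling agent.
traces = [
    ["search_inbox", "read_email", "draft_reply", "send_email"],
    ["search_inbox", "read_email", "send_email"],
    ["search_inbox", "read_email", "draft_reply", "send_email"],
]
bounds = mine_bounds(traces)
print(is_within_bounds("read_email", "draft_reply", bounds))      # frequent transition: allowed
print(is_within_bounds("read_email", "delete_all_files", bounds)) # never observed: blocked
```

At enforcement time, such a check would sit between the agent and its tools, intercepting each invocation before execution; an intent-alignment check, as described in the abstract, would be layered on top of this frequency-based filter.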