🤖 AI Summary
This work addresses a critical limitation in existing security mechanisms for large language model (LLM) agents, which often neglect the contextual dependence of agent behaviors and struggle to balance security with utility. The authors formally define contextual security through four verifiable properties: task alignment, action alignment, source authorization, and data isolation. They introduce oracle functions to dynamically detect violations of these properties during execution. By integrating formal modeling, information flow control, and runtime verification, the framework provides contextualized definitions for attacks such as prompt injection and jailbreaking, and unifies existing defense strategies under a coherent theoretical structure. The framework shows that diverse known attacks can be precisely characterized as instances of property violations, establishing a foundation for designing LLM agent defenses that jointly ensure security and practical effectiveness.
📝 Abstract
Security in LLM agents is inherently contextual. For example, the same action taken by an agent may represent legitimate behavior or a security violation depending on whose instruction led to the action, what objective is being pursued, and whether the action serves that objective. However, existing definitions of security attacks against LLM agents often fail to capture this contextual nature. As a result, defenses face a fundamental utility-security tradeoff: applying defenses uniformly across all contexts can lead to significant utility loss, while applying defenses in insufficient or inappropriate contexts can result in security vulnerabilities. In this work, we present a framework that systematizes existing attacks and defenses from the perspective of contextual security. To this end, we propose four security properties that capture contextual security for LLM agents: task alignment (pursuing authorized objectives), action alignment (individual actions serving those objectives), source authorization (executing commands from authenticated sources), and data isolation (ensuring information flows respect privilege boundaries). We further introduce a set of oracle functions that enable verification of whether these security properties are violated as an agent executes a user task. Using this framework, we reformalize existing attacks, such as indirect prompt injection, direct prompt injection, jailbreak, task drift, and memory poisoning, as violations of one or more security properties, thereby providing precise and contextual definitions of these attacks. Similarly, we reformalize defenses as mechanisms that strengthen oracle functions or perform security property checks. Finally, we discuss several important future research directions enabled by our framework.
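The four security properties can be pictured as runtime oracle checks evaluated against the context of each agent action. The sketch below is a minimal illustration of that idea; all class names, fields, and oracle signatures are hypothetical assumptions for exposition, not the paper's actual formalization.

```python
# Hypothetical sketch: the four contextual security properties as runtime
# oracle checks. Names and fields are illustrative, not the paper's model.
from dataclasses import dataclass


@dataclass
class Context:
    authorized_task: str       # objective the user authorized
    trusted_sources: frozenset # authenticated instruction sources
    clearance: int             # privilege ceiling for permitted data flows


@dataclass
class Action:
    pursued_task: str          # objective the agent is currently pursuing
    serves_task: bool          # does this individual step advance it?
    source: str                # who issued the triggering instruction
    data_level: int            # privilege level of data the action touches


def task_alignment(ctx, a):       return a.pursued_task == ctx.authorized_task
def action_alignment(ctx, a):     return a.serves_task
def source_authorization(ctx, a): return a.source in ctx.trusted_sources
def data_isolation(ctx, a):       return a.data_level <= ctx.clearance


def oracle(ctx, a):
    """Return the names of violated properties (empty list = safe)."""
    checks = {
        "task_alignment": task_alignment,
        "action_alignment": action_alignment,
        "source_authorization": source_authorization,
        "data_isolation": data_isolation,
    }
    return [name for name, check in checks.items() if not check(ctx, a)]


# An indirect prompt injection: an instruction embedded in untrusted web
# content diverts the agent from the authorized task toward exfiltration.
ctx = Context("summarize_report", frozenset({"user"}), clearance=1)
injected = Action("exfiltrate_email", serves_task=False,
                  source="web_page", data_level=2)
print(oracle(ctx, injected))  # all four properties are violated
```

Under this framing, each attack class in the abstract maps to a characteristic subset of failed checks, e.g. indirect prompt injection violates source authorization (and typically task alignment), while memory poisoning manifests as data isolation failures.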