🤖 AI Summary
This work addresses the vulnerability of large language model (LLM) agents in open environments to indirect prompt injection attacks delivered through tool chains, a threat that existing defenses struggle to mitigate without compromising reasoning flexibility. To this end, the authors propose VIGIL, a novel framework built on a "verify-before-commit" paradigm. VIGIL combines speculative hypothesis generation with intent-anchored verification, intent consistency checking, dynamic dependency modeling, and tool-flow monitoring to block injection attacks while preserving the agent's adaptive reasoning capabilities. The framework is evaluated on SIREN, a newly developed benchmark for assessing both security and utility under attack. Experimental results show that VIGIL reduces attack success rates by over 22% on SIREN and more than doubles task utility under adversarial conditions compared to static baselines, significantly outperforming current dynamic defense approaches.
📝 Abstract
LLM agents operating in open environments face escalating risks from indirect prompt injection, particularly within the tool stream, where manipulated metadata and runtime feedback can hijack execution flow. Existing defenses face a critical dilemma: advanced models prioritize injected rules due to strict alignment, while static protection mechanisms sever the feedback loop required for adaptive reasoning. To reconcile this conflict, we propose **VIGIL**, a framework that shifts the paradigm from restrictive isolation to a verify-before-commit protocol. By facilitating speculative hypothesis generation and enforcing safety through intent-grounded verification, **VIGIL** preserves reasoning flexibility while ensuring robust control. We further introduce **SIREN**, a benchmark comprising 959 tool-stream injection cases designed to simulate pervasive threats characterized by dynamic dependencies. Extensive experiments demonstrate that **VIGIL** outperforms state-of-the-art dynamic defenses, reducing the attack success rate by over 22% while more than doubling utility under attack compared to static baselines, thereby achieving a strong balance between security and utility.
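To make the verify-before-commit idea concrete, the following is a minimal sketch, not the paper's actual implementation: the agent is allowed to speculatively propose tool actions (including ones suggested by tool output), but each proposal must pass an intent-consistency check against the user's original goal before it is committed for execution. All names (`ToolAction`, `intent_consistent`, `verify_then_commit`) are hypothetical illustrations; the real VIGIL verifier is far richer than a simple allow-list.

```python
from dataclasses import dataclass, field


@dataclass
class ToolAction:
    """A speculatively proposed tool invocation (hypothetical structure)."""
    tool: str
    args: dict = field(default_factory=dict)


def intent_consistent(user_intent: set, action: ToolAction) -> bool:
    # Stand-in for VIGIL's intent-grounded verification: here we simply
    # check the tool against the set of tools the user's goal sanctions.
    return action.tool in user_intent


def verify_then_commit(user_intent: set, proposed, execute):
    """Commit only actions that survive verification; block the rest."""
    results = []
    for action in proposed:
        if intent_consistent(user_intent, action):
            results.append(execute(action))  # verified -> commit
        else:
            results.append(None)  # injected/off-intent -> blocked
    return results
```

The key design point is ordering: speculation is cheap and unrestricted (preserving adaptive reasoning), while the irreversible step, actually executing a tool, happens only after verification, so an injected instruction in tool output can propose an action but cannot get it committed.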