VIGIL: Defending LLM Agents Against Tool Stream Injection via Verify-Before-Commit

📅 2026-01-09
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the vulnerability of large language model (LLM) agents in open environments to indirect prompt injection attacks delivered through tool chains, a challenge that existing defenses struggle to mitigate without compromising reasoning flexibility. To this end, the authors propose VIGIL, a novel framework built on a "verify-before-commit" paradigm. VIGIL combines intent-anchored verification with speculative hypothesis generation, intent consistency checking, dynamic dependency modeling, and tool-flow monitoring to block injection attacks while preserving the agent's adaptive reasoning capabilities. The framework is evaluated on SIREN, a newly developed benchmark for assessing security and utility under attack. Experimental results show that VIGIL reduces attack success rates by over 22% relative to state-of-the-art dynamic defenses on SIREN, and more than doubles task utility under adversarial conditions compared to static baselines.

๐Ÿ“ Abstract
LLM agents operating in open environments face escalating risks from indirect prompt injection, particularly within the tool stream, where manipulated metadata and runtime feedback hijack execution flow. Existing defenses encounter a critical dilemma: advanced models prioritize injected rules due to strict alignment, while static protection mechanisms sever the feedback loop required for adaptive reasoning. To reconcile this conflict, we propose VIGIL, a framework that shifts the paradigm from restrictive isolation to a verify-before-commit protocol. By facilitating speculative hypothesis generation and enforcing safety through intent-grounded verification, VIGIL preserves reasoning flexibility while ensuring robust control. We further introduce SIREN, a benchmark comprising 959 tool stream injection cases designed to simulate pervasive threats characterized by dynamic dependencies. Extensive experiments demonstrate that VIGIL outperforms state-of-the-art dynamic defenses by reducing the attack success rate by over 22% while more than doubling the utility under attack compared to static baselines, thereby achieving an optimal balance between security and utility.
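The verify-before-commit idea described in the abstract can be illustrated with a minimal sketch: a proposed tool action is held back and checked against the user's original intent before it is executed. This is not the paper's implementation; all names (`ProposedAction`, `consistent_with_intent`, the toy allow-list) are hypothetical, and the verifier is a stand-in for VIGIL's intent-grounded checks.

```python
# Illustrative sketch of a verify-before-commit gate for agent tool calls.
# Assumption: all identifiers below are hypothetical; a real verifier would
# use an LLM or policy model, not a hard-coded allow-list.

from dataclasses import dataclass, field


@dataclass
class ProposedAction:
    """A tool call the agent speculatively proposes before committing."""
    tool: str
    args: dict = field(default_factory=dict)


def consistent_with_intent(action: ProposedAction, user_intent: str) -> bool:
    # Placeholder intent check: derive a toy allow-list from the stated
    # intent and reject any tool outside it (e.g. an injected "send_email").
    allowed = {"search", "read_file"} if "research" in user_intent else set()
    return action.tool in allowed


def verify_before_commit(action: ProposedAction, user_intent: str) -> tuple:
    """Execute (commit) the action only after verification passes."""
    if not consistent_with_intent(action, user_intent):
        return ("blocked", action.tool)   # off-intent action is dropped
    return ("committed", action.tool)     # safe action proceeds
```

Here an injected instruction that tries to make the agent call `send_email` during a research task would be blocked, while an in-intent `search` call commits, which mirrors how verification can preserve adaptive tool use instead of statically isolating the feedback loop.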
Problem

Research questions and friction points this paper is trying to address.

LLM agents
tool stream injection
indirect prompt injection
security
adaptive reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

tool stream injection
verify-before-commit
intent-grounded verification
LLM agent security
speculative hypothesis generation
Junda Lin
University of Science and Technology of China, State Key Laboratory of Cognitive Intelligence, Hefei, China
Zhaomeng Zhou
University of Science and Technology of China, State Key Laboratory of Cognitive Intelligence, Hefei, China
Zhi Zheng
University of Science and Technology of China, State Key Laboratory of Cognitive Intelligence, Hefei, China
Shuochen Liu
University of Science and Technology of China
Large Language Model
Tong Xu
Professor, University of Science and Technology of China
Data Mining
Yong Chen
North Automatic Control Technology Research Institute
Enhong Chen
University of Science and Technology of China
Data Mining
Recommender System
Machine Learning