AgentSentry: Mitigating Indirect Prompt Injection in LLM Agents via Temporal Causal Diagnostics and Context Purification

📅 2026-02-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the vulnerability of large language model (LLM) agents to indirect prompt injection (IPI) attacks, wherein adversaries subtly hijack agent behavior across multi-turn interactions by poisoning external tool responses. The paper presents the first formalization of such attacks as a temporal causal takeover process and introduces a runtime defense framework. This framework identifies the takeover point through controlled counterfactual re-execution at tool-return boundaries and employs a causality-guided context purification mechanism to eliminate attack-induced biases while preserving task-relevant evidence. Evaluated on the AgentDojo benchmark, the method achieves an average utility of 74.55% under attack—outperforming the strongest baseline by 20.8 to 33.6 percentage points—without compromising performance in benign scenarios.

Technology Category

Application Category

📝 Abstract
Large language model (LLM) agents increasingly rely on external tools and retrieval systems to autonomously complete complex tasks. However, this design exposes agents to indirect prompt injection (IPI), where attacker-controlled context embedded in tool outputs or retrieved content silently steers agent actions away from user intent. Unlike prompt-based attacks, IPI unfolds over multi-turn trajectories, making malicious control difficult to disentangle from legitimate task execution. Existing inference-time defenses primarily rely on heuristic detection and conservative blocking of high-risk actions, which can prematurely terminate workflows or broadly suppress tool usage under ambiguous multi-turn scenarios. We propose AgentSentry, a novel inference-time detection and mitigation framework for tool-augmented LLM agents. To the best of our knowledge, AgentSentry is the first inference-time defense to model multi-turn IPI as a temporal causal takeover. It localizes takeover points via controlled counterfactual re-executions at tool-return boundaries and enables safe continuation through causally guided context purification that removes attack-induced deviations while preserving task-relevant evidence. We evaluate AgentSentry on the \textsc{AgentDojo} benchmark across four task suites, three IPI attack families, and multiple black-box LLMs. AgentSentry eliminates successful attacks and maintains strong utility under attack, achieving an average Utility Under Attack (UA) of 74.55 %, improving UA by 20.8 to 33.6 percentage points over the strongest baselines without degrading benign performance.
Problem

Research questions and friction points this paper is trying to address.

Indirect Prompt Injection
LLM Agents
Tool-Augmented Systems
Multi-turn Trajectories
Adversarial Context
Innovation

Methods, ideas, or system contributions that make the work stand out.

indirect prompt injection
temporal causal diagnostics
context purification
counterfactual re-execution
LLM agents
🔎 Similar Papers
No similar papers found.
T
Tian Zhang
Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, Wuhan 430072, China
Yiwei Xu
Yiwei Xu
College of Information, University of Maryland; Information School, University of Washington
social data sciencehuman-centered AIhealth informaticsinformation behaviors
J
Juan Wang
Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, Wuhan 430072, China
Keyan Guo
Keyan Guo
Ph.D. Candidate, Computer Science and Engineering, University at Buffalo, New York, United States
Generative AIAI Safety & SecurityAI for Good
Xiaoyang Xu
Xiaoyang Xu
New Jersey Institute of Technology
biomaterialsnanomedicinedrug deliverynanotechnologytissue engineering
B
Bowen Xiao
Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, Wuhan 430072, China
Quanlong Guan
Quanlong Guan
Jinan University
Multimodal LearningRepresentation learningRecommendation SystemAI in education
J
Jinlin Fan
Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, Wuhan 430072, China
Jiawei Liu
Jiawei Liu
Wuhan University
Information RetrievalContent SecurityDocument Intelligence
Z
Zhiquan Liu
College of Cyber Security, Jinan University, Guangzhou 510632, China
Hongxin Hu
Hongxin Hu
Professor of Computer Science, University at Buffalo, SUNY
SecurityPrivacyNFV/SDN/5GAIIoT