π€ AI Summary
This work addresses the vulnerability of large language model (LLM) agents to indirect prompt injection (IPI) attacks, wherein adversaries hijack agent execution by poisoning tool outputs. The authors propose a novel action-level causal attribution paradigm and introduce the first runtime defense mechanism grounded in causal reasoning. By determining whether a tool invocation stems from the userβs genuine intent rather than contaminated observations, this approach overcomes the limited generalizability of conventional semantic discrimination methods. The defense integrates parallel counterfactual testing, teacher-forced shadow replay, hierarchical control attenuation, and a fuzzy survival criterion. Evaluated across four mainstream LLMs and two agent benchmarks, the method achieves 0% attack success under static IPI attacks with negligible utility loss and significantly outperforms existing defenses against adaptive attacks.
π Abstract
LLM agents are highly vulnerable to Indirect Prompt Injection (IPI), where adversaries embed malicious directives in untrusted tool outputs to hijack execution. Most existing defenses treat IPI as an input-level semantic discrimination problem, which often fails to generalize to unseen payloads. We propose a new paradigm, action-level causal attribution, which secures agents by asking why a particular tool call is produced. The central goal is to distinguish tool calls supported by the user's intent from those causally driven by untrusted observations. We instantiate this paradigm with AttriGuard, a runtime defense based on parallel counterfactual tests. For each proposed tool call, AttriGuard verifies its necessity by re-executing the agent under a control-attenuated view of external observations. Technically, AttriGuard combines teacher-forced shadow replay to prevent attribution confounding, hierarchical control attenuation to suppress diverse control channels while preserving task-relevant information, and a fuzzy survival criterion that is robust to LLM stochasticity. Across four LLMs and two agent benchmarks, AttriGuard achieves 0% ASR under static attacks with negligible utility loss and moderate overhead. Importantly, it remains resilient under adaptive optimization-based attacks in settings where leading defenses degrade significantly.