🤖 AI Summary
This work identifies a novel risk: tool-augmented large language model (LLM) agents dynamically leak users’ personal data during task execution due to prompt injection attacks. We propose a data-flow-guided prompt injection method targeting a fictional banking agent, enabling the first systematic modeling of data leakage paths through observed intermediate states in task execution. We introduce Banking-Conv—the first synthetic dialogue dataset explicitly designed for agent-level data exfiltration—and extend the AgentDojo security benchmark. Our attack achieves ~20% success rate across 16 tasks and 15% on average across 48 tasks. While password-only exfiltration remains low, leakage probability increases significantly when passwords co-occur with other sensitive attributes (e.g., account numbers, names). Crucially, built-in defenses of leading LLMs—including GPT-4, Claude, and Llama—fail to fully mitigate this threat, underscoring a critical gap in current agent security.
📝 Abstract
Previous benchmarks on prompt injection in large language models (LLMs) have primarily focused on generic tasks and attacks, offering limited insights into more complex threats like data exfiltration. This paper examines how prompt injection can cause tool-calling agents to leak personal data observed during task execution. Using a fictitious banking agent, we develop data flow-based attacks and integrate them into AgentDojo, a recent benchmark for agentic security. To enhance its scope, we also create a richer synthetic dataset of human-AI banking conversations. In 16 user tasks from AgentDojo, LLMs show a 15-50 percentage point drop in utility under attack, with average attack success rates (ASR) around 20 percent; some defenses reduce ASR to zero. Most LLMs, even when successfully tricked by the attack, avoid leaking highly sensitive data like passwords, likely due to safety alignments, but they remain vulnerable to disclosing other personal data. The likelihood of password leakage increases when a password is requested along with one or two additional personal details. In an extended evaluation across 48 tasks, the average ASR is around 15 percent, with no built-in AgentDojo defense fully preventing leakage. Tasks involving data extraction or authorization workflows, which closely resemble the structure of exfiltration attacks, exhibit the highest ASRs, highlighting the interaction between task type, agent performance, and defense efficacy.