🤖 AI Summary
This work addresses a critical vulnerability of high-privilege personal AI agents: prompt injection attacks that can lead to credential theft, fund redirection, or file corruption. These risks are inadequately captured by existing evaluations, which overlook real-world workflows and agent-framework dynamics. To bridge this gap, the authors introduce ClawSafety, the first comprehensive safety benchmark for end-to-end agent systems. It spans five professional domains, three injection channels (skill instructions, emails, and web pages), and 120 adversarial scenarios, organized by a three-dimensional taxonomy of harm domain, attack vector, and harmful action type. Through sandboxed testing, action-trajectory analysis, and cross-framework experiments across five state-of-the-art LLMs (2,520 evaluations total), they find attack success rates ranging from 40% to 75%, with skill instructions posing the highest risk. While the strongest models substantially mitigate critical harms, overall safety depends heavily on the design of the full deployment stack.
📝 Abstract
Personal AI agents like OpenClaw run with elevated privileges on users' local machines, where a single successful prompt injection can leak credentials, redirect financial transactions, or destroy files. This threat goes well beyond conventional text-level jailbreaks, yet existing safety evaluations fall short: most test models in isolated chat settings, rely on synthetic environments, and do not account for how the agent framework itself shapes safety outcomes. We introduce ClawSafety, a benchmark of 120 adversarial test scenarios organized along three dimensions (harm domain, attack vector, and harmful action type) and grounded in realistic, high-privilege professional workspaces spanning software engineering, finance, healthcare, law, and DevOps. Each test case embeds adversarial content in one of three channels the agent encounters during normal work: workspace skill files, emails from trusted senders, and web pages. We evaluate five frontier LLMs as agent backbones, running 2,520 sandboxed trials across all configurations. Attack success rates (ASR) range from 40% to 75% across models and vary sharply by injection vector, with skill instructions (highest trust) consistently more dangerous than email or web content. Action-trace analysis reveals that the strongest model maintains hard boundaries against credential forwarding and destructive actions, while weaker models permit both. Cross-scaffold experiments on three agent frameworks further demonstrate that safety is not determined by the backbone model alone but depends on the full deployment stack, calling for safety evaluation that treats model and framework as joint variables.
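The per-vector ASR breakdown the abstract describes is straightforward to compute from per-trial outcomes. A minimal sketch, assuming a hypothetical record format of `(model, injection_vector, attack_succeeded)` triples; all names and values here are illustrative, not the paper's actual data:

```python
from collections import defaultdict

# Hypothetical trial records (illustrative only): model backbone,
# injection vector, and whether the injected attack succeeded.
trials = [
    ("model_a", "skill", True),
    ("model_a", "skill", True),
    ("model_a", "email", True),
    ("model_a", "email", False),
    ("model_a", "web",   False),
    ("model_a", "web",   False),
]

def asr_by_vector(trials):
    """Attack success rate (successes / total trials) per injection vector."""
    counts = defaultdict(lambda: [0, 0])  # vector -> [successes, total]
    for _model, vector, succeeded in trials:
        counts[vector][0] += int(succeeded)
        counts[vector][1] += 1
    return {vector: s / t for vector, (s, t) in counts.items()}

rates = asr_by_vector(trials)
# In this toy sample, skill-file injections have the highest ASR,
# mirroring the paper's finding that higher-trust channels are riskier.
```

The same grouping extended to model and framework keys would reproduce the kind of model-by-vector-by-scaffold comparison the cross-scaffold experiments report.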