Your Agent, Their Asset: A Real-World Safety Analysis of OpenClaw

📅 2026-04-06
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Current safety evaluations of AI agents predominantly rely on sandboxed environments, which often fail to capture risks present in real-world deployments. This work addresses that gap by introducing the first safety evaluation framework tailored to realistic settings, focusing on OpenClaw, a personal AI agent with full local system privileges. The authors propose the CIK taxonomy (Capability, Identity, Knowledge) to uniformly characterize an agent's persistent state. Evaluating four leading large language models across twelve realistic attack scenarios, they show that compromising any single CIK dimension raises the average attack success rate from 24.6% to 64-74%. Even under the strongest existing defenses, attacks targeting the Capability dimension achieve a 63.8% success rate. The paper also introduces a file-protection mechanism that blocks 97% of malicious injections, though at the cost of also blocking legitimate updates.
๐Ÿ“ Abstract
OpenClaw, the most widely deployed personal AI agent in early 2026, operates with full local system access and integrates with sensitive services such as Gmail, Stripe, and the filesystem. While these broad privileges enable high levels of automation and powerful personalization, they also expose a substantial attack surface that existing sandboxed evaluations fail to capture. To address this gap, we present the first real-world safety evaluation of OpenClaw and introduce the CIK taxonomy, which unifies an agent's persistent state into three dimensions (Capability, Identity, and Knowledge) for safety analysis. Our evaluations cover 12 attack scenarios on a live OpenClaw instance across four backbone models (Claude Sonnet 4.5, Opus 4.6, Gemini 3.1 Pro, and GPT-5.4). The results show that poisoning any single CIK dimension increases the average attack success rate from 24.6% to 64-74%, with even the most robust model exhibiting more than a threefold increase over its baseline vulnerability. We further assess three CIK-aligned defense strategies alongside a file-protection mechanism; however, the strongest defense still yields a 63.8% success rate under Capability-targeted attacks, while file protection blocks 97% of malicious injections but also prevents legitimate updates. Taken together, these findings show that the vulnerabilities are inherent to the agent architecture, necessitating more systematic safeguards to secure personal AI agents. Our project page is https://ucsc-vlaa.github.io/CIK-Bench.
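To make the abstract's two key ideas concrete, here is a minimal sketch, assuming a simple implementation: the CIK taxonomy as a three-field persistent state, and a file-protection guard that only accepts writes whose content hash is on an approved allowlist. All names and structures here are illustrative assumptions; the paper does not publish an implementation, and the real mechanism may differ.

```python
import hashlib
from dataclasses import dataclass, field

# Hypothetical model of the CIK taxonomy: an agent's persistent state
# split into Capability, Identity, and Knowledge dimensions.
@dataclass
class CIKState:
    capability: dict = field(default_factory=dict)  # tool/skill configs
    identity: dict = field(default_factory=dict)    # persona, credentials
    knowledge: dict = field(default_factory=dict)   # long-term memory entries

def approve(content: str) -> str:
    """Hash an update so it can be placed on an allowlist."""
    return hashlib.sha256(content.encode()).hexdigest()

def protected_write(store: dict, key: str, content: str, allowlist: set) -> bool:
    """Write to a protected state file only if the content was pre-approved.

    Unapproved (possibly injected) updates are rejected, but so are
    legitimate updates that were never allowlisted, which mirrors the
    compatibility trade-off reported in the abstract.
    """
    if approve(content) not in allowlist:
        return False  # rejected: not an approved update
    store[key] = content
    return True
```

A quick usage example: with `allowlist = {approve("weekly summary prefs")}`, calling `protected_write(state.knowledge, "prefs", "weekly summary prefs", allowlist)` succeeds, while the same call with an injected payload not on the allowlist returns `False` and leaves the store untouched.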
Problem

Research questions and friction points this paper is trying to address.

AI agent safety
real-world evaluation
attack surface
personal AI
system access
Innovation

Methods, ideas, or system contributions that make the work stand out.

CIK taxonomy
real-world safety evaluation
personal AI agent
attack surface analysis
systematic safeguards