🤖 AI Summary
This study evaluates the safety risks of Clawdbot, a personal AI agent with broad action capabilities, under ambiguous user intent and adversarial prompting. Focusing on its tool-use scenarios, the work introduces the first customized safety test cases and proposes a trajectory-level, fine-grained safety evaluation framework that integrates an automated judging model (AgentDoG-Qwen3-4B), human review, and interaction trajectories adapted from ATBench and LPS-Bench. Analysis across 34 representative cases reveals that while Clawdbot demonstrates robust performance in reliability-oriented tasks, it exhibits non-uniform safety vulnerabilities in open-ended or intent-ambiguous settings, where minor misunderstandings can trigger high-impact tool operations.
📝 Abstract
Clawdbot is a self-hosted, tool-using personal AI agent with a broad action space spanning local execution and web-mediated workflows, which raises heightened safety and security concerns under ambiguity and adversarial steering. We present a trajectory-centric evaluation of Clawdbot across six risk dimensions. Our test suite samples and lightly adapts scenarios from prior agent-safety benchmarks (including ATBench and LPS-Bench) and supplements them with hand-designed cases tailored to Clawdbot's tool surface. We log complete interaction trajectories (messages, actions, tool-call arguments/outputs) and assess safety using both an automated trajectory judge (AgentDoG-Qwen3-4B) and human review. Across 34 canonical cases, we find a non-uniform safety profile: performance is generally consistent on reliability-focused tasks, while most failures arise under underspecified intent, open-ended goals, or benign-seeming jailbreak prompts, where minor misinterpretations can escalate into higher-impact tool actions. We supplemented the overall results with representative case studies and summarized the commonalities of these cases, analyzing the security vulnerabilities and typical failure modes that Clawdbot is prone to trigger in practice.