OpenAgentSafety: A Comprehensive Framework for Evaluating Real-World AI Agent Safety

📅 2025-07-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Deploying AI agents in real-world settings poses significant safety risks, yet existing evaluation methods are constrained by simulated environments, narrow-domain tasks, or abstracted tools—failing to capture authentic safety challenges. Method: The paper proposes a safety evaluation framework for AI agents operating over realistic toolchains, covering eight critical risk categories and supporting 350+ multi-turn, multi-user tasks. The framework integrates actual interfaces—including web browsers, Bash shells, file systems, code execution environments, and messaging platforms—and introduces a hybrid judgment mechanism combining rule-based engines with LLM-as-judge for multidimensional safety assessment. It further enables scalable injection of tasks, tools, and adversarial strategies. Contribution/Results: Empirical evaluation reveals alarmingly high unsafe behavior rates (51.2%–72.7%) among five prominent large language models on safety-vulnerable tasks, underscoring the necessity of real-environment assessment and demonstrating the framework's diagnostic utility for identifying and mitigating agent-level vulnerabilities.

📝 Abstract
Recent advances in AI agents capable of solving complex, everyday tasks, from scheduling to customer service, have enabled deployment in real-world settings, but their potential for unsafe behavior demands rigorous evaluation. While prior benchmarks have attempted to assess agent safety, most fall short by relying on simulated environments, narrow task domains, or unrealistic tool abstractions. We introduce OpenAgentSafety, a comprehensive and modular framework for evaluating agent behavior across eight critical risk categories. Unlike prior work, our framework evaluates agents that interact with real tools, including web browsers, code execution environments, file systems, bash shells, and messaging platforms; and supports over 350 multi-turn, multi-user tasks spanning both benign and adversarial user intents. OpenAgentSafety is designed for extensibility, allowing researchers to add tools, tasks, websites, and adversarial strategies with minimal effort. It combines rule-based analysis with LLM-as-judge assessments to detect both overt and subtle unsafe behaviors. Empirical analysis of five prominent LLMs in agentic scenarios reveals unsafe behavior in safety-vulnerable tasks ranging from 51.2% with Claude-Sonnet-3.7 to 72.7% with o3-mini, highlighting critical safety vulnerabilities and the need for stronger safeguards before real-world deployment.
Problem

Research questions and friction points this paper is trying to address.

Evaluating AI agent safety in real-world settings
Addressing limitations of prior simulated safety benchmarks
Detecting unsafe behaviors across diverse tools and tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluates agents interacting with real tools
Supports 350+ multi-turn multi-user tasks
Combines rule-based and LLM-as-judge analysis
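The hybrid judgment idea above can be illustrated with a minimal sketch: deterministic rules flag overt violations cheaply, and an LLM judge is consulted only for actions the rules cannot decide. All names, rule patterns, and the `llm_judge` callable below are hypothetical illustrations, not OpenAgentSafety's actual implementation.

```python
import re
from typing import Callable, Optional

# Hypothetical rule patterns for overtly unsafe shell actions (illustrative only).
UNSAFE_PATTERNS = [
    re.compile(r"rm\s+-rf\s+/"),           # destructive filesystem wipe
    re.compile(r"curl\s+.*\|\s*(ba)?sh"),  # piping remote code into a shell
]

def rule_based_verdict(action: str) -> Optional[bool]:
    """Return True (unsafe) if any rule matches, else None (undecided)."""
    for pattern in UNSAFE_PATTERNS:
        if pattern.search(action):
            return True
    return None

def hybrid_judge(action: str,
                 llm_judge: Optional[Callable[[str], bool]] = None) -> bool:
    """Rules catch overt violations; an LLM judge handles subtle cases."""
    verdict = rule_based_verdict(action)
    if verdict is not None:
        return verdict
    if llm_judge is not None:
        return llm_judge(action)  # e.g., a prompted safety classifier call
    return False  # no rule fired and no LLM judge available

print(hybrid_judge("rm -rf / --no-preserve-root"))  # True: rule fires
print(hybrid_judge("ls -la /home"))                 # False: benign, no judge
```

The design choice mirrors the framework's motivation: rule engines are fast and reproducible but miss subtle unsafe behavior, while the LLM judge covers nuanced cases at higher cost.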