AI Summary
Existing web agent benchmarks focus solely on task completion rate, neglecting security and trustworthiness (ST) and thereby hindering deployment in safety-critical enterprise settings. This paper introduces the first ST-oriented benchmark for enterprise-grade web agents, comprising 222 real-world tasks with explicit security policy constraints. It quantifies performance across six orthogonal dimensions: user authorization, robustness, policy adherence, transparency, accountability, and resilience. The authors propose a dual-metric framework, Completion Under Policy (CuP) and Risk Ratio, embedded in a policy-driven, configurable, and extensible evaluation paradigm, and open-source a policy authoring interface and standardized assessment templates. Empirical evaluation of three state-of-the-art open web agents shows an average CuP below two-thirds of their nominal completion rate, exposing critical security gaps. The benchmark is publicly released to support rigorous assessment of trustworthy deployment.
Abstract
Autonomous web agents solve complex browsing tasks, yet existing benchmarks measure only whether an agent finishes a task, ignoring whether it does so safely or in a way enterprises can trust. To integrate these agents into critical workflows, safety and trustworthiness (ST) are prerequisites for adoption. We introduce ST-WebAgentBench, a configurable and easily extensible suite for evaluating web agent ST across realistic enterprise scenarios. Each of its 222 tasks is paired with ST policies, concise rules that encode constraints, and is scored along six orthogonal dimensions (e.g., user consent, robustness). Beyond raw task success, we propose the Completion Under Policy (CuP) metric, which credits only completions that respect all applicable policies, and the Risk Ratio, which quantifies ST breaches across dimensions. Evaluating three open state-of-the-art agents reveals that their average CuP is less than two-thirds of their nominal completion rate, exposing critical safety gaps. By releasing code, evaluation templates, and a policy-authoring interface, ST-WebAgentBench (https://sites.google.com/view/st-webagentbench/home) provides an actionable first step toward deploying trustworthy web agents at scale.
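To make the two metrics concrete, here is a minimal Python sketch of how CuP and the Risk Ratio could be computed from per-task results, following the definitions above: CuP credits a task only if it completed and breached no applicable policy, while the Risk Ratio measures the rate of breaches in a given ST dimension. The `TaskResult` structure and its field names are illustrative assumptions, not the benchmark's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class TaskResult:
    completed: bool                          # did the agent finish the task?
    violations: list = field(default_factory=list)  # ST dimensions breached, e.g. ["consent"]

def completion_under_policy(results):
    """Fraction of tasks completed with zero policy violations (CuP)."""
    ok = sum(1 for r in results if r.completed and not r.violations)
    return ok / len(results)

def risk_ratio(results, dimension):
    """Fraction of tasks that breached a policy in the given ST dimension."""
    breached = sum(1 for r in results if dimension in r.violations)
    return breached / len(results)

results = [
    TaskResult(True),                 # completed safely: counts toward CuP
    TaskResult(True, ["consent"]),    # completed, but violated a consent policy
    TaskResult(False),                # failed outright
]
print(completion_under_policy(results))   # 1 of 3 tasks completed under policy
print(risk_ratio(results, "consent"))     # 1 of 3 tasks breached "consent"
```

Note how the second task illustrates the gap the paper highlights: a raw completion-rate metric would credit it, while CuP does not.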