ST-WebAgentBench: A Benchmark for Evaluating Safety and Trustworthiness in Web Agents

๐Ÿ“… 2024-10-09
๐Ÿ›๏ธ arXiv.org
๐Ÿ“ˆ Citations: 14
โœจ Influential: 4
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Existing web agent evaluation benchmarks focus solely on task completion rate, neglecting security and trustworthiness (ST), thereby hindering deployment in safety-critical enterprise settings. This paper introduces the first ST-oriented benchmark for enterprise-grade web agents, comprising 222 real-world tasks with explicit security policy constraints. It quantifies performance across six orthogonal dimensionsโ€”user authorization, robustness, policy adherence, transparency, accountability, and resilience. We propose a novel dual-metric framework: Completion Under Policy (CuP) and Risk Ratio, embedded within a policy-driven, configurable, and extensible evaluation paradigm. We further open-source a policy authoring interface and standardized assessment templates. Empirical evaluation reveals that mainstream open-source web agents achieve only 34% average CuP relative to nominal completion rates, exposing critical security gaps. The benchmark is publicly released to support rigorous, trustworthy deployment assessment.

Technology Category

Application Category

๐Ÿ“ Abstract
Autonomous web agents solve complex browsing tasks, yet existing benchmarks measure only whether an agent finishes a task, ignoring whether it does so safely or in a way enterprises can trust. To integrate these agents into critical workflows, safety and trustworthiness (ST) are prerequisite conditions for adoption. We introduce extbf{ extsc{ST-WebAgentBench}}, a configurable and easily extensible suite for evaluating web agent ST across realistic enterprise scenarios. Each of its 222 tasks is paired with ST policies, concise rules that encode constraints, and is scored along six orthogonal dimensions (e.g., user consent, robustness). Beyond raw task success, we propose the extit{Completion Under Policy} ( extit{CuP}) metric, which credits only completions that respect all applicable policies, and the extit{Risk Ratio}, which quantifies ST breaches across dimensions. Evaluating three open state-of-the-art agents reveals that their average CuP is less than two-thirds of their nominal completion rate, exposing critical safety gaps. By releasing code, evaluation templates, and a policy-authoring interface, href{https://sites.google.com/view/st-webagentbench/home}{ extsc{ST-WebAgentBench}} provides an actionable first step toward deploying trustworthy web agents at scale.
Problem

Research questions and friction points this paper is trying to address.

Evaluating web agent safety and trustworthiness beyond task completion
Measuring compliance with enterprise policies during web browsing tasks
Quantifying safety gaps in autonomous web agents' behavior
Innovation

Methods, ideas, or system contributions that make the work stand out.

Configurable benchmark suite for web agents
Completion Under Policy metric for safety
Risk Ratio quantifying trustworthiness breaches
๐Ÿ”Ž Similar Papers
No similar papers found.