Claw-Eval: Toward Trustworthy Evaluation of Autonomous Agents

📅 2026-04-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current agent evaluation practice commonly suffers from overreliance on final outputs, insufficient attention to safety and robustness, and limited coverage of modalities and interaction scenarios. To address these limitations, this work proposes Claw-Eval, an end-to-end evaluation framework comprising 300 human-verified tasks. It introduces, for the first time, a trajectory-aware scoring mechanism that captures behavioral evidence through three channels (execution trajectories, audit logs, and environment snapshots), enabling hybrid automated and human assessment against 2,159 fine-grained criteria. The framework defines a tripartite metric structure of completeness, safety, and robustness, and reports Pass@k and Pass^k to distinguish genuine capability from accidental success. Experiments reveal that conventional final-output-only methods miss 44% of safety violations and 13% of robustness failures, while multimodal performance varies significantly and no single model dominates across all dimensions.
📝 Abstract
Large language models are increasingly deployed as autonomous agents executing multi-step workflows in real-world software environments. However, existing agent benchmarks suffer from three critical limitations: (1) trajectory-opaque grading that checks only final outputs, (2) underspecified safety and robustness evaluation, and (3) narrow modality coverage and interaction paradigms. We introduce Claw-Eval, an end-to-end evaluation suite addressing all three gaps. It comprises 300 human-verified tasks spanning 9 categories across three groups (general service orchestration, multimodal perception and generation, and multi-turn professional dialogue). Every agent action is recorded through three independent evidence channels (execution traces, audit logs, and environment snapshots), enabling trajectory-aware grading over 2,159 fine-grained rubric items. The scoring protocol evaluates Completion, Safety, and Robustness, reporting Average Score, Pass@k, and Pass^k across three trials to distinguish genuine capability from lucky outcomes. Experiments on 14 frontier models reveal that: (1) trajectory-opaque evaluation is systematically unreliable, missing 44% of safety violations and 13% of robustness failures that our hybrid pipeline catches; (2) controlled error injection primarily degrades consistency rather than peak capability, with Pass^3 dropping up to 24% while Pass@3 remains stable; (3) multimodal performance varies sharply, with most models performing worse on video than on document or image, and no single model dominating across all modalities. Beyond benchmarking, Claw-Eval highlights actionable directions for agent development, shedding light on what it takes to build agents that are not only capable but reliably deployable.
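The abstract's two headline metrics reward different things: Pass@k credits a task if any of k trials succeeds (peak capability), while Pass^k requires all k trials to succeed (consistency). The paper's exact estimators are not reproduced here; a minimal sketch using the standard combinatorial formulation, with the three-trial setup from the abstract assumed, could look like:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Estimated probability that at least one of k trials passes,
    given n total trials with c successes (combinatorial estimator)."""
    if n - c < k:
        return 1.0  # cannot draw k failures, so some draw must pass
    return 1.0 - comb(n - c, k) / comb(n, k)

def pass_hat_k(n: int, c: int, k: int) -> float:
    """Estimated probability that all k drawn trials pass (Pass^k):
    a consistency measure, strictly harder than pass@k."""
    if c < k:
        return 0.0  # not enough successes to fill all k draws
    return comb(c, k) / comb(n, k)

# A task run n=3 times with c=2 successes: capable, but not consistent.
n, c = 3, 2
print(pass_at_k(n, c, 3))   # 1.0 -- at least one of the 3 trials passed
print(pass_hat_k(n, c, 3))  # 0.0 -- not every trial passed
```

This gap is exactly the abstract's error-injection finding in miniature: a model can keep a high Pass@3 (it still succeeds sometimes) while its Pass^3 collapses (it no longer succeeds every time).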
Problem

Research questions and friction points this paper is trying to address.

autonomous agents
evaluation benchmark
safety
robustness
multimodal
Innovation

Methods, ideas, or system contributions that make the work stand out.

trajectory-aware evaluation
multimodal agent benchmarking
safety and robustness assessment
execution trace auditing
Pass^k scoring
Bowen Ye
State Key Laboratory of Multimedia Information Processing, School of Computer Science, Peking University
Rang Li
State Key Laboratory of Multimedia Information Processing, School of Computer Science, Peking University
Qibin Yang
State Key Laboratory of Multimedia Information Processing, School of Computer Science, Peking University
Yuanxin Liu
Peking University
Natural Language Processing
Linli Yao
Peking University
multi-modal semantic understanding
Hanglong Lv
State Key Laboratory of Multimedia Information Processing, School of Computer Science, Peking University
Zhihui Xie
University of Hong Kong, Shanghai Jiao Tong University
Natural Language Processing, Reinforcement Learning, Alignment
Chenxin An
The University of Hong Kong
Long-context LLMs
Lei Li
University of Hong Kong, Peking University
Large Language Models, Vision-Language Learning
Lingpeng Kong
Google DeepMind, The University of Hong Kong
Natural Language Processing, Machine Learning
Qi Liu
University of Hong Kong, Reka AI
Natural Language Processing, Machine Learning, Artificial General Intelligence
Zhifang Sui
State Key Laboratory of Multimedia Information Processing, School of Computer Science, Peking University
Tong Yang
Peking University, Beijing, China
Sketch, Network measurement, Bloom filter, IP lookup, Hash Table