AI Summary
Current AI agent benchmarks suffer from ill-posed task formulations and biased reward mechanisms, leading to performance estimation errors of up to 100% and severely compromising evaluation validity and reliability. To address this, we propose the Agentic Benchmark Checklist (ABC), the first standardized, empirically grounded framework that systematically integrates benchmark-construction expertise and best practices. Through case studies, defect diagnosis, and empirical validation, ABC identifies and rectifies critical design flaws in mainstream benchmarks such as CVE-Bench. Under ABC-guided revision, CVE-Bench's performance overestimation decreases by 33%, markedly enhancing evaluation rigor and cross-benchmark comparability. This work establishes a reusable methodological foundation and practical standard for AI agent evaluation, advancing both benchmarking science and trustworthy agent assessment.
Abstract
Benchmarks are essential for quantitatively tracking progress in AI. As AI agents become increasingly capable, researchers and practitioners have introduced agentic benchmarks to evaluate agents on complex, real-world tasks. These benchmarks typically measure agent capabilities by evaluating task outcomes via specific reward designs. However, we show that many agentic benchmarks have issues in task setup or reward design. For example, SWE-bench Verified uses insufficient test cases, while TAU-bench counts empty responses as successful. Such issues can lead to under- or overestimation of agents' performance by up to 100% in relative terms. To make agentic evaluation rigorous, we introduce the Agentic Benchmark Checklist (ABC), a set of guidelines that we synthesized from our benchmark-building experience, a survey of best practices, and previously reported issues. When applied to CVE-Bench, a benchmark with a particularly complex evaluation design, ABC reduces the performance overestimation by 33%.