🤖 AI Summary
This work addresses the gap that existing AI agent benchmarks inadequately evaluate performance on real-world, complex, long-horizon command-line tasks. To close this gap, the authors introduce a new evaluation benchmark of 89 high-difficulty terminal tasks, all derived from authentic workflows and each accompanied by an isolated execution environment, a human-authored reference solution, and automated verification tests. The benchmark is designed for realism, verifiability, and diversity, narrowing the disparity between practical scenarios and current model evaluation paradigms. Experiments show that even state-of-the-art agents achieve success rates below 65% on the benchmark. The paper also provides a comprehensive error analysis and publicly releases the dataset and evaluation toolchain to support future research in this domain.
📝 Abstract
AI agents may soon become capable of autonomously completing valuable, long-horizon tasks in diverse domains. Current benchmarks either do not measure real-world tasks, or are not sufficiently difficult to meaningfully measure frontier models. To this end, we present Terminal-Bench 2.0: a carefully curated hard benchmark composed of 89 tasks in computer terminal environments inspired by problems from real workflows. Each task features a unique environment, human-written solution, and comprehensive tests for verification. We show that frontier models and agents score less than 65% on the benchmark and conduct an error analysis to identify areas for model and agent improvement. We publish the dataset and evaluation harness to assist developers and researchers in future work at https://www.tbench.ai/.