🤖 AI Summary
Existing language agent benchmarks are largely confined to narrow, simplified tasks, lacking the diversity, real-world environmental fidelity, and long-horizon interaction needed to reflect practical deployment performance. Method: We introduce Toolathlon, the first benchmark for language agents designed explicitly for realistic, multi-application scenarios. It encompasses 32 authentic software platforms (e.g., Google Calendar, Notion, Kubernetes, BigQuery) and 604 executable tools, supporting 108 cross-application, long-horizon tasks. Toolathlon leverages the Model Context Protocol (MCP) for high-fidelity tool interfacing, employs real-world state initialization, and implements verifiable, automated evaluation. Contribution/Results: Empirical evaluation reveals a substantial capability gap: the strongest closed-source model, Claude-4.5-Sonnet, achieves only a 38.6% task success rate, while the top open-source model, DeepSeek-V3.2-Exp, attains just 20.1%. These results underscore the significant challenges language agents face in real-world, production-grade settings.
📝 Abstract
Real-world language agents must handle complex, multi-step workflows across diverse Apps. For instance, an agent may manage emails by coordinating with calendars and file systems, or monitor a production database to detect anomalies and generate reports following an operating manual. However, existing language agent benchmarks often focus on narrow domains or simplified tasks that lack the diversity, realism, and long-horizon complexity required to evaluate agents' real-world performance. To address this gap, we introduce the Tool Decathlon (dubbed Toolathlon), a benchmark for language agents offering diverse Apps and tools, realistic environment setup, and reliable execution-based evaluation. Toolathlon spans 32 software applications and 604 tools, ranging from everyday platforms such as Google Calendar and Notion to professional ones like WooCommerce, Kubernetes, and BigQuery. Most of the tools are based on a high-quality set of Model Context Protocol (MCP) servers that we revised or implemented ourselves. Unlike prior works, which primarily ensure functional realism but offer limited environment state diversity, we provide realistic initial environment states drawn from real software, such as Canvas courses with dozens of students or real financial spreadsheets. The benchmark includes 108 manually sourced or crafted tasks in total, each requiring interaction with multiple Apps over roughly 20 turns on average to complete. Every task is strictly verifiable through dedicated evaluation scripts. Comprehensive evaluation of SOTA models highlights their significant shortcomings: the best-performing model, Claude-4.5-Sonnet, achieves only a 38.6% success rate with 20.2 tool-calling turns on average, while the top open-weights model, DeepSeek-V3.2-Exp, reaches 20.1%. We expect Toolathlon to drive the development of more capable language agents for real-world, long-horizon task execution.