🤖 AI Summary
Existing API agent benchmarks inadequately assess large language models' (LLMs) reasoning capabilities in realistic, complex scenarios. Method: We introduce the first large-scale, multidimensionally challenging API agent benchmark aligned with real-world requirements—featuring thousands of authentic vendor APIs (e.g., from Apple), fine-grained user queries, and human-annotated high-quality action sequences with precise parameter specifications. It enables systematic evaluation across the full pipeline: API selection, dynamic parameter grounding, and human-AI collaborative input handling. We propose a novel evaluation paradigm integrating multidimensional difficulty modeling and real-world task alignment, conducting reproducible experiments on ten state-of-the-art LLMs: five open-source (≥57B parameters) and five closed-source. Contribution/Results: Empirical analysis reveals that current API agents achieve <40% success rate on complex queries; cross-API orchestration and context-sensitive parameter inference remain critical bottlenecks. All data, code, and execution logs are publicly released.
📝 Abstract
Recent advancements in integrating large language models (LLMs) with application programming interfaces (APIs) have gained significant interest in both academia and industry. Recent work demonstrates that these API-based agents exhibit relatively strong autonomy and planning capabilities. However, their ability to handle multi-dimensional difficulty levels, diverse task types, and real-world demands remains unknown. In this paper, we introduce ShortcutsBench, a large-scale benchmark for the comprehensive evaluation of API-based agents in solving real-world complex tasks. ShortcutsBench includes a wealth of real APIs from Apple Inc., refined user queries, human-annotated high-quality action sequences, detailed parameter-filling values, and parameters requesting necessary input from the system or user. We reveal how existing benchmarks/datasets struggle to accommodate the advanced reasoning capabilities of today's more intelligent LLMs. Moreover, our extensive evaluation of agents built with 5 leading open-source LLMs (size ≥ 57B) and 5 closed-source LLMs (e.g., Gemini-1.5-Pro and GPT-4o-mini) with varying intelligence levels reveals significant limitations of existing API-based agents throughout the whole process of handling complex queries, including API selection, parameter filling, and requesting necessary input from the system and the user. These findings highlight the great challenges that API-based agents face in effectively fulfilling real and complex user queries. All datasets, code, experimental logs, and results are available at https://github.com/EachSheep/ShortcutsBench.