ShortcutsBench: A Large-Scale Real-world Benchmark for API-based Agents

📅 2024-06-28
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Existing API-agent benchmarks inadequately assess large language models' (LLMs) reasoning capabilities in realistic, complex scenarios. Method: We introduce the first large-scale, multidimensionally challenging API-agent benchmark aligned with real-world requirements, featuring thousands of authentic vendor APIs (e.g., from Apple), fine-grained user queries, and human-annotated high-quality action sequences with precise parameter-filling values. It enables systematic evaluation across the full pipeline: API selection, parameter filling, and requesting necessary input from the system or the user. We conduct reproducible experiments spanning multiple difficulty dimensions and real-world task types on ten state-of-the-art LLMs: five open-source models (≥57B parameters) and five closed-source models. Contribution/Results: Empirical analysis reveals that current API-based agents achieve under 40% success on complex queries; cross-API orchestration and context-sensitive parameter inference remain critical bottlenecks. All data, code, and execution logs are publicly released.

📝 Abstract
Recent advancements in integrating large language models (LLMs) with application programming interfaces (APIs) have gained significant interest in both academia and industry. Recent work demonstrates that these API-based agents exhibit relatively strong autonomy and planning capabilities. However, their ability to handle multi-dimensional difficulty levels, diverse task types, and real-world demands remains unknown. In this paper, we introduce ShortcutsBench, a large-scale benchmark for the comprehensive evaluation of API-based agents in solving real-world complex tasks. ShortcutsBench includes a wealth of real APIs from Apple Inc., refined user queries, human-annotated high-quality action sequences, detailed parameter-filling values, and parameters requesting necessary input from the system or user. We reveal how existing benchmarks and datasets struggle to accommodate the advanced reasoning capabilities of today's more intelligent LLMs. Moreover, our extensive evaluation of agents built with 5 leading open-source LLMs (size ≥ 57B) and 5 closed-source LLMs (e.g., Gemini-1.5-Pro and GPT-4o-mini) of varying intelligence levels reveals significant limitations of existing API-based agents across the whole process of handling complex queries: API selection, parameter filling, and requesting necessary input from the system and the user. These findings highlight the great challenges that API-based agents face in effectively fulfilling real and complex user queries. All datasets, code, experimental logs, and results are available at https://github.com/EachSheep/ShortcutsBench.
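The abstract describes evaluating agents along three axes: API selection, parameter filling, and requesting needed input. As an illustration only (not the benchmark's actual code), the sketch below scores a predicted action sequence against a gold one on the first two axes; the action identifiers and the dict schema are hypothetical examples in the style of Apple Shortcuts actions.

```python
# Hypothetical scoring sketch: compare a predicted action sequence against
# a human-annotated gold sequence on API selection and parameter filling.
def evaluate_action_sequence(predicted, gold):
    # API selection: position-wise match of chosen action identifiers.
    api_hits = sum(1 for p, g in zip(predicted, gold) if p["api"] == g["api"])

    # Parameter filling: a gold parameter counts as filled only if the
    # API at that step was also chosen correctly.
    param_hits = param_total = 0
    for p, g in zip(predicted, gold):
        for key, value in g["params"].items():
            param_total += 1
            if p["api"] == g["api"] and p["params"].get(key) == value:
                param_hits += 1

    return {
        "api_selection_acc": api_hits / max(len(gold), 1),
        "param_filling_acc": param_hits / max(param_total, 1),
    }

# Toy two-step task: look up the weather, then surface the result.
gold = [
    {"api": "is.workflow.actions.weather.currentconditions",
     "params": {"location": "Beijing"}},
    {"api": "is.workflow.actions.notification",
     "params": {"body": "WeatherResult"}},
]
pred = [
    {"api": "is.workflow.actions.weather.currentconditions",
     "params": {"location": "Beijing"}},
    {"api": "is.workflow.actions.sendmessage",  # wrong API at step 2
     "params": {"body": "WeatherResult"}},
]
print(evaluate_action_sequence(pred, gold))
# -> {'api_selection_acc': 0.5, 'param_filling_acc': 0.5}
```

The real benchmark additionally grades whether the agent correctly asks the system or user for missing inputs, which this toy metric omits.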
Problem

Research questions and friction points this paper is trying to address.

Evaluate API-based agents' real-world task handling.
Assess multi-dimensional difficulty and diverse task types.
Reveal limitations in API selection and parameter filling.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale API-based benchmark
Real-world complex task evaluation
Open and closed-source LLMs comparison
Haiyang Shen
Institute for Artificial Intelligence, Peking University
Yue Li
School of Software & Microelectronics, Peking University
Desong Meng
School of Electronics Engineering and Computer Science, Peking University
Dongqi Cai
Beijing University of Posts and Telecommunications
Sheng Qi
Ph.D. student, Peking University (serverless computing)
Li Zhang
Beijing University of Posts and Telecommunications
Mengwei Xu
Beijing University of Posts and Telecommunications
Yun Ma
Assistant Professor, Peking University (Web, Mobile Computing, Software Engineering, Service)