🤖 AI Summary
This work addresses the limitations of existing evaluation methods for assessing large language models' ability to use external tools in complex real-world scenarios, which often suffer from oversimplified toolsets, rigid workflows, or subjective scoring. To this end, we present the first large-scale benchmark grounded in real Model Context Protocol (MCP) servers, encompassing 36 MCP services, 220 tools, and 1,000 multi-step natural language tasks that require agents to autonomously discover and orchestrate multiple tools. The evaluation employs a no-tool-name prompting strategy and a fine-grained, fact-based scoring mechanism, supported by a containerized framework and multidimensional diagnostic metrics covering tool discovery, parameterization, and error recovery. Experiments reveal that even state-of-the-art models achieve pass rates only slightly exceeding 50%, with primary failure modes stemming from insufficient tool utilization and task comprehension errors. The benchmark framework, task schema, and a public subset of 500 tasks are openly released.
📝 Abstract
The Model Context Protocol (MCP) is rapidly becoming the standard interface for Large Language Models (LLMs) to discover and invoke external tools. However, existing evaluations often fail to capture the complexity of real-world scenarios, relying on restricted toolsets, simplistic workflows, or subjective LLM-as-a-judge metrics. We introduce MCP-Atlas, a large-scale benchmark comprising 36 real MCP servers, 220 tools, and 1,000 tasks that evaluate tool-use competency in realistic, multi-step workflows. Tasks use natural language prompts that avoid naming specific tools or servers, requiring agents to identify and orchestrate 3-6 tool calls across multiple servers. We score tasks using a claims-based rubric that awards partial credit based on the factual claims satisfied in the model's final answer, complemented by internal diagnostics on tool discovery, parameterization, syntax, error recovery, and efficiency. Evaluation results on frontier models reveal that top models achieve pass rates only slightly exceeding 50%, with primary failures arising from inadequate tool usage and task understanding. We release the task schema, containerized harness, and a 500-task public subset of the benchmark dataset to facilitate reproducible comparisons and advance the development of robust, tool-augmented agents.
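The claims-based rubric described above can be sketched as a simple partial-credit scorer: each task carries a list of verifiable factual claims, and the score is the fraction of claims the final answer satisfies. This is a minimal illustrative sketch; the `Claim` structure, check interface, and example values are assumptions for illustration, not the benchmark's actual implementation.

```python
# Hypothetical sketch of claims-based partial-credit scoring.
# The Claim class, check interface, and example task are illustrative
# assumptions, not the benchmark's real schema.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Claim:
    description: str
    # Returns True if the model's final answer satisfies this claim.
    check: Callable[[str], bool]


def score_answer(answer: str, claims: List[Claim]) -> float:
    """Fraction of factual claims satisfied by the final answer (partial credit)."""
    if not claims:
        return 0.0
    satisfied = sum(1 for c in claims if c.check(answer))
    return satisfied / len(claims)


# Example: a task rubric with three independently verifiable claims.
rubric = [
    Claim("names the flight number", lambda a: "UA123" in a),
    Claim("states the correct fare", lambda a: "$420" in a),
    Claim("names the departure city", lambda a: "Chicago" in a),
]

full = score_answer("UA123 departs Chicago at 9am for $420.", rubric)    # 1.0
partial = score_answer("UA123 departs from Chicago.", rubric)            # ~0.667
```

A pass threshold can then be applied on top of this fractional score, while the per-claim breakdown feeds the fine-grained diagnostics.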