MCP-Atlas: A Large-Scale Benchmark for Tool-Use Competency with Real MCP Servers

📅 2026-01-31
📈 Citations: 2
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing evaluation methods for assessing large language models’ ability to use external tools in complex real-world scenarios, which often suffer from oversimplified toolsets, rigid workflows, or subjective scoring. To this end, we present the first large-scale benchmark grounded in real Model Context Protocol (MCP) servers, encompassing 36 MCP services, 220 tools, and 1,000 multi-step natural language tasks that require agents to autonomously discover and orchestrate multiple tools. The evaluation employs a no-tool-name prompting strategy and a fine-grained, fact-based scoring mechanism, supported by a containerized framework and multidimensional diagnostic metrics—including tool discovery, parameterization, and error recovery. Experiments reveal that state-of-the-art models achieve pass rates exceeding 50%, with primary failure modes stemming from insufficient tool utilization and task comprehension errors. The benchmark framework, task schema, and a public subset of 500 tasks are openly released.

📝 Abstract
The Model Context Protocol (MCP) is rapidly becoming the standard interface for Large Language Models (LLMs) to discover and invoke external tools. However, existing evaluations often fail to capture the complexity of real-world scenarios, relying on restricted toolsets, simplistic workflows, or subjective LLM-as-a-judge metrics. We introduce MCP-Atlas, a large-scale benchmark for evaluating tool-use competency, comprising 36 real MCP servers, 220 tools, and 1,000 tasks set in realistic, multi-step workflows. Tasks use natural language prompts that avoid naming specific tools or servers, requiring agents to identify and orchestrate 3-6 tool calls across multiple servers. We score tasks using a claims-based rubric that awards partial credit according to the factual claims satisfied in the model's final answer, complemented by internal diagnostics on tool discovery, parameterization, syntax, error recovery, and efficiency. Evaluation of frontier models reveals that even top models achieve pass rates only slightly above 50%, with primary failures arising from inadequate tool usage and task understanding. We release the task schema, containerized harness, and a 500-task public subset of the benchmark dataset to facilitate reproducible comparisons and advance the development of robust, tool-augmented agents.
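The claims-based rubric can be read as a simple fraction: partial credit equals the share of a task's rubric claims that the final answer satisfies. A minimal sketch of that scoring rule (the class names, threshold, and example claims are illustrative assumptions, not the benchmark's actual implementation; in practice each claim would be checked by a grading step rather than set by hand):

```python
from dataclasses import dataclass

@dataclass
class Claim:
    """One expected factual claim from a task's rubric (hypothetical schema)."""
    description: str
    satisfied: bool  # in practice, set by a fact-checking grader

def score_task(claims: list[Claim], pass_threshold: float = 1.0) -> tuple[float, bool]:
    """Partial credit = fraction of rubric claims the final answer satisfies."""
    if not claims:
        return 0.0, False
    score = sum(c.satisfied for c in claims) / len(claims)
    return score, score >= pass_threshold

# Illustrative task: the answer satisfies 2 of 3 rubric claims.
claims = [
    Claim("states the correct flight price", True),
    Claim("names the correct airline", True),
    Claim("includes the layover duration", False),
]
score, passed = score_task(claims)  # score = 2/3, passed = False
```

The point of partial credit is that a multi-step task is not scored all-or-nothing: an agent that completes most of the workflow is distinguished from one that fails at the first tool call, while the pass threshold still separates full task completion.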
Problem

Research questions and friction points this paper is trying to address.

tool-use competency
large language models
Model Context Protocol
real-world evaluation
multi-step workflows
Innovation

Methods, ideas, or system contributions that make the work stand out.

MCP-Atlas
tool-use evaluation
Model Context Protocol
multi-step tool orchestration
claims-based scoring
Chaithanya Bandi
Kellogg School of Management, Northwestern University
Ben Hertzberg
Scale AI
Geobio Boo
Scale AI
Tejas Polakam
Scale AI
Jeff Da
Scale AI
Sami Hassaan
Scale AI
Manasi Sharma
Scale AI
Andrew Park
Scale AI
Ernesto Hernandez
Scale AI
Dan Rambado
Scale AI
Ivan Salazar
Scale AI
Rafael Cruz
Scale AI
Chetan Rane
Scale AI
Ben Levin
Scale AI
Brad Kenstler
Scale AI
Bing Liu
Scale AI