LiveMCPBench: Can Agents Navigate an Ocean of MCP Tools?

📅 2025-08-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing MCP benchmarks are limited to single-server settings with few tools, failing to evaluate agent capabilities in large-scale, realistic multi-server environments. Method: We introduce LiveMCPBench—the first comprehensive benchmark for large-scale MCP ecosystems—comprising 95 real-world tasks and 70 heterogeneous MCP servers. We design LiveMCPEval, an LLM-as-a-judge–based automated evaluation framework, and develop the MCP Copilot Agent, supporting cross-server invocation and dynamic planning. Additionally, we curate LiveMCPTool, a deployable toolkit integrating 527 production-grade tools. Contribution/Results: Evaluation across 10 mainstream LLMs reveals that Claude-Sonnet-4 achieves the highest task success rate of 78.95%, exposing substantial performance disparities among models in complex, dynamic tool-augmented environments. LiveMCPBench thus establishes a rigorous, scalable foundation for benchmarking and advancing MCP agents.

📝 Abstract
With the rapid development of Model Context Protocol (MCP), the number of MCP servers has surpassed 10,000. However, existing MCP benchmarks are limited to single-server settings with only a few tools, hindering effective evaluation of agent capabilities in large-scale, real-world scenarios. To address this limitation, we present LiveMCPBench, the first comprehensive benchmark comprising 95 real-world tasks grounded in the MCP ecosystem, designed to evaluate LLM agents at scale across diverse servers. To support a scalable and reproducible evaluation pipeline in large-scale MCP environments, we curate LiveMCPTool, a diverse and readily deployable collection of 70 MCP servers and 527 tools. Furthermore, we introduce LiveMCPEval, an LLM-as-a-Judge framework that enables automated and adaptive evaluation in dynamic, time-varying task environments, achieving 81% agreement with human reviewers. Finally, we propose the MCP Copilot Agent, a multi-step agent that routes tools for dynamic planning and executes tools for API interaction across the entire LiveMCPTool suite. Our evaluation covers 10 leading models, with the best-performing model (Claude-Sonnet-4) reaching a 78.95% success rate. However, we observe large performance variance across models, and several widely-used models perform poorly in LiveMCPBench's complex, tool-rich environments. Overall, LiveMCPBench offers the first unified framework for benchmarking LLM agents in realistic, tool-rich, and dynamic MCP environments, laying a solid foundation for scalable and reproducible research on agent capabilities. Our code and data will be publicly available at https://icip-cas.github.io/LiveMCPBench.
Problem

Research questions and friction points this paper is trying to address.

Evaluating agent capabilities in large-scale MCP environments
Addressing limitations of single-server MCP benchmarks
Automating adaptive evaluation in dynamic task environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

LiveMCPBench evaluates agents across diverse MCP servers
LiveMCPTool packages 70 readily deployable MCP servers with 527 production-grade tools
LiveMCPEval automates adaptive evaluation with LLM-as-a-Judge
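The LLM-as-a-judge evaluation that LiveMCPEval performs can be sketched roughly as follows. This is a minimal illustration, not the paper's actual implementation: the `TaskRecord` fields, the prompt wording, and the `stub_judge` stand-in (which would be a real LLM call in practice) are all hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class TaskRecord:
    task: str          # natural-language task description
    trajectory: str    # agent's tool-call transcript
    final_answer: str  # agent's final response

JUDGE_PROMPT = (
    "You are an impartial judge. Given the task, the agent's tool-call "
    "trajectory, and its final answer, reply with exactly 'SUCCESS' or "
    "'FAILURE'.\n\nTask: {task}\nTrajectory: {trajectory}\n"
    "Final answer: {final_answer}"
)

def evaluate(records: list[TaskRecord], judge: Callable[[str], str]) -> float:
    """Return the fraction of task runs the judge marks as successful."""
    successes = 0
    for rec in records:
        verdict = judge(JUDGE_PROMPT.format(
            task=rec.task,
            trajectory=rec.trajectory,
            final_answer=rec.final_answer,
        ))
        successes += verdict.strip().upper().startswith("SUCCESS")
    return successes / len(records) if records else 0.0

def stub_judge(prompt: str) -> str:
    # Trivial stand-in for a real LLM call: succeed when the answer says "done".
    return "SUCCESS" if "done" in prompt else "FAILURE"

records = [
    TaskRecord("Book a flight", "search_flights -> book_flight", "done: booked NYC-SFO"),
    TaskRecord("Find the weather", "get_weather", ""),
]
print(evaluate(records, stub_judge))  # → 0.5
```

In the paper's setting the judge is itself an LLM scoring dynamic, time-varying task outcomes, which is why the framework reports agreement with human reviewers (81%) rather than exact-match accuracy.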
👥 Authors
Guozhao Mo — Chinese Information Processing Laboratory, Institute of Software, Chinese Academy of Sciences
Wenliang Zhong — University of Science and Technology Beijing (Optimization, Resource Allocation)
Jiawei Chen — Chinese Information Processing Laboratory, Institute of Software, Chinese Academy of Sciences
Xuanang Chen — Institute of Software, Chinese Academy of Sciences (Information Retrieval, Natural Language Processing)
Yaojie Lu — Institute of Software, Chinese Academy of Sciences (Information Extraction, Large Language Models)
Hongyu Lin — Chinese Information Processing Laboratory, Institute of Software, Chinese Academy of Sciences
Ben He — Professor, University of Chinese Academy of Sciences (Natural Language Processing, Information Retrieval)
Xianpei Han — Chinese Information Processing Laboratory, Institute of Software, Chinese Academy of Sciences
Le Sun — Institute of Software, CAS (Information Retrieval, Natural Language Processing)