MCP-RADAR: A Multi-Dimensional Benchmark for Evaluating Tool Use Capabilities in Large Language Models

📅 2025-05-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing evaluation methodologies inadequately assess large language models’ (LLMs) tool utilization capabilities within the Model Context Protocol (MCP) framework. Method: We introduce the first MCP-oriented, multidimensional automated benchmark, featuring five quantitative metrics—answer accuracy, tool selection efficiency, resource consumption, parameter construction precision, and execution speed—to overcome the limitations of binary or human-centric evaluation. Our objective, pipeline-based assessment covers software engineering, mathematical reasoning, and general problem solving, enabling cross-model and cross-tool-framework comparisons. Contribution/Results: We comprehensively evaluate leading commercial and open-source LLMs, revealing systematic differences and trade-offs in tool discovery, invocation, and orchestration. All code, configurations, and datasets are publicly released to advance standardization of the LLM-agent tool ecosystem.

📝 Abstract
As Large Language Models (LLMs) evolve from passive text generators to active reasoning agents capable of tool interaction, the Model Context Protocol (MCP) has emerged as a standardized framework for dynamic tool discovery and orchestration. Despite widespread industry adoption, existing evaluation methodologies fail to adequately assess tool utilization capabilities within this new paradigm. This paper introduces MCP-RADAR, the first comprehensive benchmark specifically designed to evaluate LLM performance in the MCP framework through a novel five-dimensional approach measuring: answer accuracy, tool selection efficiency, computational resource efficiency, parameter construction accuracy, and execution speed. Unlike conventional benchmarks that rely on subjective human evaluations or binary success metrics, MCP-RADAR employs objective, quantifiable measurements across multiple task domains including software engineering, mathematical reasoning, and general problem-solving. Our evaluations of leading commercial and open-source LLMs reveal distinctive capability profiles with significant trade-offs between accuracy, efficiency, and speed, challenging traditional single-metric performance rankings. In addition, we provide guidance for developers to optimize their tools for maximum model compatibility and effectiveness. While focused on MCP due to its standardized approach, our methodology remains applicable across all LLM agent tool integration frameworks, providing valuable insights for both LLM developers and tool creators to optimize the entire LLM-tool interaction ecosystem. The implementation, configurations, and datasets used in our evaluation are publicly available at https://anonymous.4open.science/r/MCPRadar-B143.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLM tool use in MCP framework comprehensively
Assessing five dimensions: answer accuracy, tool selection efficiency, resource consumption, parameter construction precision, and execution speed
Providing guidance for optimizing LLM-tool interaction ecosystem
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces MCP-RADAR for multi-dimensional LLM evaluation
Measures accuracy, efficiency, speed via objective metrics
Applicable across all LLM-tool interaction frameworks
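The multi-dimensional scoring idea above can be sketched as a per-model "radar" profile aggregated into a composite score. This is an illustration only: the five dimension names follow the abstract, but the normalization, the `RadarProfile` class, and the uniform default weighting are assumptions, not the paper's actual formulas.

```python
# Illustrative sketch of a five-dimensional radar-style score (assumed design,
# not MCP-RADAR's published scoring formulas).
from dataclasses import dataclass, astuple


@dataclass
class RadarProfile:
    """One model's scores, each normalized to [0, 1] with higher = better."""
    answer_accuracy: float
    tool_selection_efficiency: float
    resource_efficiency: float
    parameter_accuracy: float
    execution_speed: float

    def composite(self, weights=None) -> float:
        """Weighted mean over the five dimensions (uniform by default)."""
        dims = astuple(self)
        weights = weights or [1.0 / len(dims)] * len(dims)
        return sum(w * d for w, d in zip(weights, dims))


profile = RadarProfile(0.82, 0.70, 0.65, 0.90, 0.55)
print(round(profile.composite(), 3))  # prints 0.724 with uniform weights
```

A composite like this makes the trade-offs the paper highlights explicit: re-weighting the dimensions (e.g. emphasizing execution speed over accuracy) can reorder the model ranking, which is exactly why a single-metric leaderboard is insufficient.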
👥 Authors
Xuanqi Gao
Xi'an Jiaotong University
Software Engineering, Software Security, Deep Learning
Siyi Xie
Xi’an Jiaotong University, Xi’an, China
Juan Zhai
University of Massachusetts, Amherst
software text analytics, software reliability, deep learning
Shiqing Ma
University of Massachusetts at Amherst, Amherst, USA
Chao Shen
Xi’an Jiaotong University, Xi’an, China