MCPEval: Automatic MCP-based Deep Evaluation for AI Agent Models

📅 2025-07-17
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Current LLM agent evaluation relies on static benchmarks and manual annotation, suffering from poor scalability and low efficiency. To address this, we propose the first open-source, automated, deep-evaluation framework for LLM agents based on the Model Context Protocol (MCP). Our method integrates MCP-compliant tool orchestration, adaptive task modeling, and a standardized multi-dimensional metric suite to enable fully automated, reproducible, end-to-end assessment across diverse domains, without human intervention. The framework dynamically generates realistic, domain-specific tasks, natively incorporates external tools, and supports fine-grained performance analysis along dimensions including correctness, efficiency, robustness, and tool utilization. We validate it across five real-world scenarios, demonstrating substantial improvements in both evaluation throughput and analytical depth. The framework is publicly released as an open-source project, establishing a standardized, scalable infrastructure for rigorous, automated AI agent evaluation.
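The summary mentions fine-grained analysis along dimensions such as correctness, efficiency, and tool utilization. As a rough illustration of what a multi-dimensional metric suite over agent trajectories might look like, here is a minimal Python sketch; the `Trajectory` record shape, field names, and scoring formulas are hypothetical assumptions, not the paper's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    name: str
    succeeded: bool

@dataclass
class Trajectory:
    """One agent run on a generated task (hypothetical record shape)."""
    task_id: str
    correct: bool                 # final answer matched task ground truth
    steps: int                    # reasoning/tool steps actually taken
    budget: int                   # step budget allotted to the task
    tool_calls: list[ToolCall] = field(default_factory=list)

def score(trajs: list[Trajectory]) -> dict[str, float]:
    """Aggregate an illustrative multi-dimensional metric suite."""
    n = len(trajs)
    calls = [c for t in trajs for c in t.tool_calls]
    return {
        # correctness: fraction of tasks solved end-to-end
        "correctness": sum(t.correct for t in trajs) / n,
        # efficiency: average fraction of the step budget left unused
        "efficiency": sum(max(0, t.budget - t.steps) / t.budget
                          for t in trajs) / n,
        # tool utilization: fraction of tool calls that succeeded
        "tool_utilization": sum(c.succeeded for c in calls)
                            / max(1, len(calls)),
    }
```

In a real MCP-based harness, the trajectories would be produced by executing the agent against MCP servers and recording each tool invocation; the paper's released framework presumably defines its own richer schema and robustness metrics.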

๐Ÿ“ Abstract
The rapid rise of Large Language Model (LLM)-based intelligent agents underscores the need for robust, scalable evaluation frameworks. Existing methods rely on static benchmarks and labor-intensive data collection, limiting practical assessment. We introduce MCPEval, an open-source Model Context Protocol (MCP)-based framework that automates end-to-end task generation and deep evaluation of LLM agents across diverse domains. MCPEval standardizes metrics, seamlessly integrates with native agent tools, and eliminates manual effort in building evaluation pipelines. Empirical results across five real-world domains show its effectiveness in revealing nuanced, domain-specific performance. We publicly release MCPEval at https://github.com/SalesforceAIResearch/MCPEval to promote reproducible and standardized LLM agent evaluation.
Problem

Research questions and friction points this paper is trying to address.

Static benchmarks fail to capture realistic, evolving agent behavior
Labor-intensive manual data collection and annotation scales poorly
No standardized metrics or pipelines exist for cross-domain agent evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automates end-to-end task generation and deep evaluation via MCP
Standardizes multi-dimensional metrics and integrates natively with agent tools
Eliminates manual effort in building evaluation pipelines