AI Summary
Current LLM agent evaluation relies on static benchmarks and manual annotation, suffering from poor scalability and low efficiency. To address this, we propose the first open-source, automated deep-evaluation framework for LLM agents based on the Model Context Protocol (MCP). Our method integrates MCP-compliant tool orchestration, adaptive task modeling, and a standardized multi-dimensional metric suite to enable fully automated, reproducible, end-to-end assessment across diverse domains, without human intervention. The framework dynamically generates realistic, domain-specific tasks, natively incorporates external tools, and supports fine-grained performance analysis along dimensions including correctness, efficiency, robustness, and tool utilization. We validate it across five real-world scenarios, demonstrating substantial improvements in both evaluation throughput and analytical depth. The framework is publicly released as an open-source project, establishing a standardized, scalable infrastructure for rigorous, automated AI agent evaluation.
Abstract
The rapid rise of Large Language Model (LLM)-based intelligent agents underscores the need for robust, scalable evaluation frameworks. Existing methods rely on static benchmarks and labor-intensive data collection, limiting practical assessment. We introduce MCPEval, an open-source Model Context Protocol (MCP)-based framework that automates end-to-end task generation and deep evaluation of LLM agents across diverse domains. MCPEval standardizes metrics, seamlessly integrates with native agent tools, and eliminates manual effort in building evaluation pipelines. Empirical results across five real-world domains show its effectiveness in revealing nuanced, domain-specific performance. We publicly release MCPEval (https://github.com/SalesforceAIResearch/MCPEval) to promote reproducible and standardized LLM agent evaluation.