🤖 AI Summary
General-purpose agents struggle to efficiently discover and compose tens of thousands of task-specific tools in complex enterprise environments. Method: We introduce TheMCPCompany—a large-scale, multi-service benchmark for tool-calling evaluation built from the REST APIs of real-world services, featuring over 18,000 production-grade tools and human-annotated ground-truth tool sets. Built upon the Model Context Protocol (MCP), it standardizes tool serving, integrates explicit tool-retrieval mechanisms, and supports end-to-end agent evaluation. Results: Agents with explicit retrieval significantly outperform browser-only agents; GPT-5 approaches ground-truth performance under ideal retrieval but degrades substantially in realistic, complex scenarios. Our analysis reveals fundamental limitations in current models' semantic understanding of tools, cross-service composition, and long-horizon reasoning. TheMCPCompany establishes a critical evaluation infrastructure and identifies concrete directions for advancing tool-aware reasoning and retrieval models.
📝 Abstract
Since the introduction of the Model Context Protocol (MCP), the number of tools available to Large Language Models (LLMs) has increased significantly. These task-specific tool sets offer an alternative to general-purpose tools such as web browsers, while being easier to develop and maintain than GUIs. However, current general-purpose agents predominantly rely on web browsers for interacting with the environment. Here, we introduce TheMCPCompany, a benchmark for evaluating tool-calling agents on tasks that involve interacting with various real-world services. We use the REST APIs of these services to create MCP servers, which together expose over 18,000 tools. We also provide manually annotated ground-truth tools for each task. In our experiments, we use the ground-truth tools to show the potential of tool-calling agents for both improving performance and reducing costs, assuming perfect tool retrieval. Next, we explore agent performance with tool retrieval to study the real-world practicality of tool-based agents. While all models with tool retrieval perform similarly to or better than browser-based agents, smaller models cannot take full advantage of the available tools through retrieval. On the other hand, GPT-5's performance with tool retrieval is very close to its performance with ground-truth tools. Overall, our work shows that the most advanced reasoning models are effective at discovering tools in simpler environments, but seriously struggle with navigating complex enterprise environments. TheMCPCompany reveals that navigating tens of thousands of tools and combining them in non-trivial ways to solve complex problems remains challenging for current models and requires both better reasoning and better retrieval models.
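To make the retrieval setting concrete, the sketch below shows one simple way an agent could narrow 18,000+ tools down to a few candidates before calling an LLM: rank tools by keyword overlap between the task query and each tool's name and description. This is an illustrative assumption, not the benchmark's actual retrieval method (the paper's retrievers are likely stronger, e.g. embedding-based); the `Tool` class and example tools are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Tool:
    # Hypothetical stand-in for an MCP tool's metadata.
    name: str
    description: str

def retrieve_tools(query: str, tools: list[Tool], k: int = 3) -> list[Tool]:
    """Return the k tools whose name+description share the most words with the query."""
    query_words = set(query.lower().split())

    def score(tool: Tool) -> int:
        tool_words = set((tool.name + " " + tool.description).lower().split())
        return len(query_words & tool_words)

    return sorted(tools, key=score, reverse=True)[:k]

# Hypothetical example tools, standing in for a much larger catalog.
tools = [
    Tool("create_ticket", "Create a new support ticket in the helpdesk service"),
    Tool("list_repos", "List repositories in the code hosting service"),
    Tool("send_invoice", "Send an invoice to a customer"),
]

top = retrieve_tools("create a helpdesk ticket", tools, k=1)
print(top[0].name)  # → create_ticket
```

A real deployment would replace the word-overlap score with semantic embeddings, since, as the abstract notes, shallow matching is exactly where current tool discovery breaks down in complex multi-service environments.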