OSWorld-MCP: Benchmarking MCP Tool Invocation In Computer-Use Agents

📅 2025-10-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing benchmarks predominantly evaluate multimodal agents’ GUI interaction capabilities while neglecting their ability to invoke external tools via the Model Context Protocol (MCP), leading to biased and incomplete assessments. To address this gap, we propose OSWorld-MCP—the first fair, end-to-end benchmark operating in realistic OS environments that jointly evaluates both GUI navigation and MCP-based tool invocation. We curate 158 high-quality, human-verified general-purpose tools, develop an automated code-generation pipeline for task execution, and integrate state-of-the-art multimodal large language models for comprehensive evaluation. Experimental results demonstrate that MCP integration substantially improves task success rates (e.g., OpenAI o3 increases from 8.3% to 20.4%), yet even the best-performing model achieves only a 36.3% tool-call success rate—highlighting significant unresolved challenges. This work establishes the first systematic evaluation framework for MCP-driven multimodal agents, filling a critical gap in agent benchmarking.

📝 Abstract
With advances in decision-making and reasoning capabilities, multimodal agents show strong potential in computer application scenarios. Past evaluations have mainly assessed GUI interaction skills, while tool invocation abilities, such as those enabled by the Model Context Protocol (MCP), have been largely overlooked. Comparing agents with integrated tool invocation to those evaluated only on GUI interaction is inherently unfair. We present OSWorld-MCP, the first comprehensive and fair benchmark for assessing computer-use agents' tool invocation, GUI operation, and decision-making abilities in a real-world environment. We design a novel automated code-generation pipeline to create tools and combine them with a curated selection from existing tools. Rigorous manual validation yields 158 high-quality tools (covering 7 common applications), each verified for correct functionality, practical applicability, and versatility. Extensive evaluations of state-of-the-art multimodal agents on OSWorld-MCP show that MCP tools generally improve task success rates (e.g., from 8.3% to 20.4% for OpenAI o3 at 15 steps, from 40.1% to 43.3% for Claude 4 Sonnet at 50 steps), underscoring the importance of assessing tool invocation capabilities. However, even the strongest models achieve a tool invocation rate of only 36.3%, indicating room for improvement and highlighting the benchmark's challenge. By explicitly measuring MCP tool usage skills, OSWorld-MCP deepens understanding of multimodal agents and sets a new standard for evaluating performance in complex, tool-assisted environments. Our code, environment, and data are publicly available at https://osworld-mcp.github.io.
Problem

Research questions and friction points this paper is trying to address.

Evaluating multimodal agents' tool invocation abilities in computer applications
Assessing GUI operation and decision-making in real-world environments
Measuring MCP tool usage skills for complex task performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated code-generation pipeline for tool creation
Combines curated existing tools with novel ones
Rigorous manual validation ensures high-quality tools
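To make concrete what "MCP tool invocation" means in benchmarks like this, here is a minimal sketch of the JSON-RPC 2.0 `tools/call` message shape that MCP uses. The tool name (`rename_file`) and its arguments are invented for illustration; the actual OSWorld-MCP tools and their schemas are defined by the benchmark itself.

```python
import json

def make_tool_call_request(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 request in the general shape MCP uses for tool calls."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical example: an agent deciding to invoke a file-renaming tool
# instead of performing the equivalent drag-and-drop GUI interaction.
request = make_tool_call_request("rename_file", {"src": "a.txt", "dst": "b.txt"})
parsed = json.loads(request)
print(parsed["method"])          # tools/call
print(parsed["params"]["name"])  # rename_file
```

A benchmark measuring tool-call success then checks both whether the agent chose to emit such a call when appropriate and whether the call's arguments accomplish the task.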
Hongrui Jia
Peking University
Jitong Liao
Tongyi Lab, Alibaba Group
Xi Zhang
Tongyi Lab, Alibaba Group
Haiyang Xu
Tongyi Lab, Alibaba Group
Tianbao Xie
University of Hong Kong
Chaoya Jiang
Shandong University
Ming Yan
Tongyi Lab, Alibaba Group
Si Liu
Fred Hutchinson Cancer Center
Wei Ye
Peking University
Fei Huang
Tongyi Lab, Alibaba Group