🤖 AI Summary
Existing LLM-based agents rely heavily on open-web search, which suffers from high information noise and lacks precise domain-specific knowledge; although the Model Context Protocol (MCP) lets agents integrate domain-specific tools, its synergy with general-purpose search has not been assessed. Method: We introduce InfoMosaic-Bench, the first benchmark for evaluating tool-augmented agents on multi-source information seeking, spanning six domains (e.g., medicine, finance). Tasks are synthesized with InfoMosaic-Flow, a verification-driven pipeline that grounds task conditions in verified tool outputs, enforces cross-source dependencies, and filters out shortcut cases, enabling rigorous evaluation of multi-source fusion and reasoning. Contribution/Results: Experiments show that GPT-5 achieves only 38.2% accuracy; notably, 22.4% of failures stem from incorrect tool usage or selection, exposing fundamental weaknesses in multi-source coordination and basic tool handling in current agents.
📝 Abstract
Information seeking is a fundamental requirement for humans. However, existing LLM agents rely heavily on open-web search, which exposes two fundamental weaknesses: online content is noisy and unreliable, and many real-world tasks require precise, domain-specific knowledge unavailable from the web. The emergence of the Model Context Protocol (MCP) now allows agents to interface with thousands of specialized tools, seemingly resolving this limitation. Yet it remains unclear whether agents can effectively leverage such tools -- and more importantly, whether they can integrate them with general-purpose search to solve complex tasks. Therefore, we introduce InfoMosaic-Bench, the first benchmark dedicated to multi-source information seeking in tool-augmented agents. Covering six representative domains (medicine, finance, maps, video, web, and multi-domain integration), InfoMosaic-Bench requires agents to combine general-purpose search with domain-specific tools. Tasks are synthesized with InfoMosaic-Flow, a scalable pipeline that grounds task conditions in verified tool outputs, enforces cross-source dependencies, and filters out shortcut cases solvable by trivial lookup. This design guarantees both reliability and non-triviality. Experiments with 14 state-of-the-art LLM agents reveal three findings: (i) web information alone is insufficient, with GPT-5 achieving only 38.2% accuracy and 67.5% pass rate; (ii) domain tools provide selective but inconsistent benefits, improving some domains while degrading others; and (iii) 22.4% of failures arise from incorrect tool usage or selection, highlighting that current LLMs still struggle with even basic tool handling.
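To make the InfoMosaic-Flow filtering step concrete, here is a minimal illustrative sketch of the two checks the abstract describes: enforcing cross-source dependencies and dropping shortcut cases solvable by a trivial web lookup. All names (`Task`, `grounded_sources`, `solvable_by_web_alone`) are hypothetical; the paper's actual pipeline is not specified at this level of detail.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    question: str
    # Tool sources whose verified outputs ground the task's conditions
    # (hypothetical field; stands in for the pipeline's provenance records)
    grounded_sources: set = field(default_factory=set)
    # True if a single general-web lookup already answers the task
    solvable_by_web_alone: bool = False

def has_cross_source_dependency(task: Task, min_sources: int = 2) -> bool:
    """Keep only tasks whose conditions span multiple tool sources."""
    return len(task.grounded_sources) >= min_sources

def filter_shortcuts(tasks: list[Task]) -> list[Task]:
    """Drop shortcut cases (trivial lookup) and single-source tasks."""
    return [
        t for t in tasks
        if has_cross_source_dependency(t) and not t.solvable_by_web_alone
    ]

candidates = [
    Task("Which trial site is nearest the sponsor's headquarters?",
         grounded_sources={"medicine_tool", "maps_tool"}),
    Task("What is the company's ticker symbol?",
         grounded_sources={"web"}, solvable_by_web_alone=True),
]
kept = filter_shortcuts(candidates)
print(len(kept))  # only the cross-source task survives
```

The point of the sketch is the composition of the two filters: a task must both depend on more than one source and resist a single-lookup shortcut to enter the benchmark.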