🤖 AI Summary
This work investigates how well commercial large language models (LLMs) with built-in web search calibrate their search invocation: when to search, whether search is actually triggered, and whether it improves factual accuracy. We introduce the first black-box evaluation benchmark, requiring no access to model internals, that distinguishes between static-knowledge and time-sensitive questions. The two-stage test set comprises 783 static and 288 time-sensitive questions, each annotated by human experts for search necessity. Results show that while web search improves overall accuracy, commercial LLMs exhibit systematic deficiencies: overconfidence in incorrect answers, failures to retrieve relevant information, and poor recovery after an initial query fails. The core contribution is the first decoupled evaluation of search necessity versus search effectiveness in production LLMs, which reveals that these models are better suited as low-latency verification layers than as reliable reasoning components.
📝 Abstract
Modern large language models integrate web search to provide real-time answers, yet it remains unclear whether they are well calibrated to use search when it is actually needed. We introduce a benchmark that evaluates both the necessity and the effectiveness of web access across commercial models, with no access to internal states or parameters. The dataset includes a static split of 783 temporally anchored questions answerable from pre-cutoff knowledge, which tests whether models invoke search when internal confidence is low, and a dynamic split of 288 post-cutoff queries, which tests whether models recognise that search is required and retrieve updated information. Web access substantially improves static accuracy for GPT-5-mini and Claude Haiku 4.5, though confidence calibration worsens. On dynamic queries, both models frequently invoke search yet remain below 70 percent accuracy due to weak query formulation. Cost per accuracy-improving call remains low, but returns diminish once initial retrieval fails. Selective invocation helps, but models become overconfident and inconsistent after search. Overall, built-in web search meaningfully improves factual accuracy and can be invoked selectively, yet models remain overconfident, skip retrieval when it is essential, and falter once an initial query underperforms. Built-in web search therefore serves better as a low-latency verification layer than as a reliable analytical tool, with clear room for improvement.
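The decoupling described above separates two questions: did the model's decision to invoke search match the expert annotation of whether search was needed, and when search was invoked, did it yield a correct answer? A minimal sketch of those two metrics, assuming a per-question record format that the paper does not specify (all field names and example records here are illustrative, not the authors' released code):

```python
# Illustrative sketch of decoupling search-necessity calibration from
# search effectiveness. Field names and example records are assumptions
# for illustration, not the benchmark's actual data schema.
from dataclasses import dataclass

@dataclass
class Record:
    split: str            # "static" (pre-cutoff) or "dynamic" (post-cutoff)
    needs_search: bool    # expert annotation: is search required?
    invoked_search: bool  # did the model trigger its built-in web tool?
    correct: bool         # was the final answer factually correct?

def necessity_calibration(records):
    """Fraction of questions where the invoke/skip decision matched
    the expert annotation of whether search was needed."""
    return sum(r.invoked_search == r.needs_search for r in records) / len(records)

def search_effectiveness(records):
    """Accuracy restricted to questions where search was invoked:
    did retrieval actually translate into a correct answer?"""
    invoked = [r for r in records if r.invoked_search]
    return sum(r.correct for r in invoked) / len(invoked) if invoked else 0.0

records = [
    Record("static", False, False, True),   # correctly skipped search, answered right
    Record("static", True, True, True),     # correctly searched, answered right
    Record("dynamic", True, True, False),   # searched, but weak query -> wrong
    Record("dynamic", True, False, False),  # skipped search when essential -> wrong
]
print(necessity_calibration(records))  # 0.75
print(search_effectiveness(records))   # 0.5
```

Keeping the two scores separate makes the paper's headline failure modes visible: a model can have decent invocation calibration yet still score poorly on effectiveness because of weak query formulation.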