🤖 AI Summary
This work addresses over-searching in retrieval-augmented large language models, which often invoke retrieval tools unnecessarily, leading to computational waste and hallucinations. The study systematically shows how this issue intensifies in complex reasoning, multi-turn dialogues, and noisy retrieval settings. To quantify the trade-off between efficiency and correctness, the authors introduce a new metric, Tokens Per Correctness (TPC), and present OverSearchQA, a new benchmark dataset. Through multi-dimensional analyses of model abstention behavior and retrieval evidence composition, they demonstrate that including negative evidence significantly improves a model's ability to refrain from unnecessary searches. By combining query-level and retrieval-level mitigation strategies, the proposed approach effectively curbs over-searching. The OverSearchQA dataset is publicly released to foster research on efficient retrieval-augmented generation.
📝 Abstract
Search-augmented large language models (LLMs) excel at knowledge-intensive tasks by integrating external retrieval. However, they often over-search -- unnecessarily invoking the search tool even when it does not improve response quality -- which leads to computational inefficiency and hallucinations from incorporating irrelevant context. In this work, we conduct a systematic evaluation of over-searching across multiple dimensions, including query types, model categories, retrieval conditions, and multi-turn conversations. Our findings show: (i) search generally improves answer accuracy on answerable queries but harms abstention on unanswerable ones; (ii) over-searching is more pronounced in complex reasoning models and deep research systems, is exacerbated by noisy retrieval, and compounds across turns in multi-turn conversations; and (iii) the composition of retrieved evidence is crucial, as the presence of negative evidence improves abstention. To quantify over-searching, we introduce Tokens Per Correctness (TPC), an evaluation metric that captures the performance-cost trade-off for search-augmented LLMs. Lastly, we investigate mitigation approaches at both the query and retrieval levels and release the OverSearchQA benchmark to foster continued research into efficient search-augmented LLMs.
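The abstract names the TPC metric but does not spell out its formula. The sketch below illustrates one plausible reading of a "tokens per correctness" trade-off metric -- total tokens consumed divided by the number of correct responses -- purely as an assumption; the function name, data layout, and example numbers are hypothetical and not taken from the paper.

```python
# Hedged sketch of a tokens-per-correctness style metric.
# ASSUMPTION: TPC = (total tokens consumed) / (number of correct responses).
# The paper may define it differently; this is illustrative only.

def tokens_per_correctness(results):
    """results: list of (tokens_used, is_correct) pairs for one system."""
    total_tokens = sum(tokens for tokens, _ in results)
    num_correct = sum(1 for _, correct in results if correct)
    if num_correct == 0:
        # No correct answers: the cost per correct answer is unbounded.
        return float("inf")
    return total_tokens / num_correct

# Hypothetical comparison: a system that always searches (spending many
# tokens per query) vs. one that searches selectively.
always_search = [(1200, True), (1100, True), (1300, False), (1250, True)]
selective = [(400, True), (900, True), (350, False), (450, True)]

print(tokens_per_correctness(always_search))  # higher cost per correct answer
print(tokens_per_correctness(selective))
```

Under this reading, a lower TPC is better: two systems with equal accuracy can still differ sharply in TPC if one invokes search (and burns tokens) on queries where retrieval adds nothing.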