🤖 AI Summary
Large-scale tool collections exceed LLMs' context windows, preventing full loading of tool descriptions (TDs) and degrading retrieval effectiveness. To address this, the paper proposes ToolDreamer, a framework built on hypothetical tool descriptions: it uses an LLM to generate query-aligned, synthetic TDs, which replace the raw query as the retrieval cue, thereby delegating part of the reasoning burden to the retrieval stage. The method works with both sparse and dense retrievers and requires no fine-tuning to adapt to diverse retrieval models. Evaluated on the ToolRet benchmark, ToolDreamer significantly improves recall and accuracy across mainstream retrieval models. Crucially, it demonstrates strong compatibility and generalization for models both with and without prior tool-related training, establishing a lightweight, general-purpose, and efficient retrieval paradigm for tool-augmented LLMs.
📝 Abstract
Tool calling has become increasingly popular for Large Language Models (LLMs). However, for large tool sets, the resulting tokens would exceed the LLM's context window limit, making it impossible to include every tool. Hence, an external retriever is used to provide LLMs with the most relevant tools for a query. Existing retrieval models rank tools based on the similarity between a user query and a tool description (TD). This leads to suboptimal retrieval, as user requests are often poorly aligned with the language of TDs. To remedy the issue, we propose ToolDreamer, a framework to condition retriever models to fetch tools based on hypothetical (synthetic) TDs generated using an LLM, i.e., descriptions of tools that the LLM anticipates will be useful for the query. The framework enables a more natural alignment between queries and tools within the language space of TDs. We apply ToolDreamer on the ToolRet dataset and show that our method improves the performance of sparse and dense retrievers both with and without training, showcasing its flexibility. Through our proposed framework, our aim is to offload a portion of the reasoning burden to the retriever so that the LLM may effectively handle a large collection of tools without inundating its context window.
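The core idea resembles query rewriting for retrieval: instead of matching the raw user query against tool descriptions, an LLM first imagines the description of a tool that would solve the query, and that hypothetical TD is matched against the real TDs. The sketch below illustrates this flow under loud assumptions: `generate_hypothetical_td` is a trivial stand-in for the LLM generation step, and `embed`/`cosine` use a toy bag-of-words representation in place of a real sparse or dense retriever; none of these names come from the paper.

```python
# Illustrative sketch of hypothetical-TD retrieval (not the paper's code).
# Assumptions: generate_hypothetical_td() stands in for an LLM call;
# embed() is a toy bag-of-words encoder standing in for a real retriever.
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Toy term-frequency "embedding"; a real system would use BM25 or a
    # dense encoder over the same texts.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def generate_hypothetical_td(query: str) -> str:
    # Placeholder for the LLM step: produce a description of a tool the
    # LLM expects to be useful. Here we just rephrase the query as a TD.
    return f"A tool that can {query.lower()}"


def retrieve(query: str, tool_descriptions: dict[str, str], k: int = 2) -> list[str]:
    # Rank real TDs by similarity to the hypothetical TD, not the raw query,
    # so matching happens inside the language space of tool descriptions.
    hyp = embed(generate_hypothetical_td(query))
    ranked = sorted(
        tool_descriptions.items(),
        key=lambda item: cosine(hyp, embed(item[1])),
        reverse=True,
    )
    return [name for name, _ in ranked[:k]]


tools = {
    "weather_api": "A tool that can fetch the current weather forecast for a city",
    "calculator": "A tool that can evaluate arithmetic expressions",
    "translator": "A tool that can translate text between languages",
}
print(retrieve("fetch the current weather forecast for Paris", tools, k=1))
```

Only the top-k retrieved TDs are then placed in the LLM's context for actual tool calling, which is how the framework keeps large tool sets within the context window.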