ToolDreamer: Instilling LLM Reasoning Into Tool Retrievers

📅 2025-10-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large-scale tool collections exceed LLMs' context windows, so the full set of tool descriptions (TDs) cannot be loaded, which degrades retrieval effectiveness. To address this, the paper proposes Hypothetical Tool Descriptions (HTD): an LLM generates query-aligned, synthetic TDs that serve as retrieval cues, delegating part of the reasoning to the retrieval stage. The method integrates sparse and dense retrieval techniques and requires no fine-tuning, so it adapts to diverse retrievers. Evaluated on the ToolRet benchmark, HTD significantly improves recall and accuracy across mainstream retrieval models, and it generalizes well to models both with and without prior tool-related training. HTD thus establishes a lightweight, general-purpose, and efficient retrieval paradigm for tool-augmented LLMs.

📝 Abstract
Tool calling has become increasingly popular for Large Language Models (LLMs). However, for large tool sets, the resulting tokens would exceed the LLM's context window limit, making it impossible to include every tool. Hence, an external retriever is used to provide LLMs with the most relevant tools for a query. Existing retrieval models rank tools based on the similarity between a user query and a tool description (TD). This leads to suboptimal retrieval, as user requests are often poorly aligned with the language of TDs. To remedy the issue, we propose ToolDreamer, a framework to condition retriever models to fetch tools based on hypothetical (synthetic) TDs generated using an LLM, i.e., descriptions of tools that the LLM believes will be potentially useful for the query. The framework enables a more natural alignment between queries and tools within the language space of TDs. We apply ToolDreamer on the ToolRet dataset and show that our method improves the performance of sparse and dense retrievers with and without training, thus showcasing its flexibility. Through our proposed framework, our aim is to offload a portion of the reasoning burden to the retriever so that the LLM may effectively handle a large collection of tools without inundating its context window.
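The core idea can be sketched in a few lines: instead of matching the raw query against tool descriptions, first have an LLM write the description of a tool it imagines would solve the query, then rank real TDs by similarity to that hypothetical description. The sketch below is illustrative only: the `hypothetical_td` function is a hard-coded stand-in for the LLM call, and bag-of-words cosine similarity stands in for the paper's sparse/dense retrievers; names and tool entries are invented for the example.

```python
import math
from collections import Counter

def bow(text):
    """Lowercased bag-of-words vector as a Counter."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two Counter vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def hypothetical_td(query):
    # Stand-in for the LLM call: in the paper, an LLM writes the
    # description of a tool it thinks would answer the query.
    # Hard-coded here purely for illustration.
    return ("a tool that retrieves current weather conditions and "
            "temperature forecasts for a given city")

def retrieve(query, tool_descriptions, k=2):
    """Rank real tool descriptions against the hypothetical TD,
    not against the raw query -- the key move of the framework."""
    htd_vec = bow(hypothetical_td(query))
    scored = sorted(tool_descriptions.items(),
                    key=lambda kv: cosine(htd_vec, bow(kv[1])),
                    reverse=True)
    return [name for name, _ in scored[:k]]

tools = {
    "weather_api": "Retrieves weather conditions and temperature forecasts for a city",
    "stock_quote": "Fetches real-time stock prices and market data",
    "translator":  "Translates text between natural languages",
}
print(retrieve("Do I need an umbrella in Paris tomorrow?", tools))
```

Note that the raw query ("umbrella", "Paris") shares almost no vocabulary with any TD; it is the hypothetical description that bridges the lexical gap, which is why the approach can help sparse retrievers without any fine-tuning.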
Problem

Research questions and friction points this paper is trying to address.

Addresses tool retrieval misalignment between user queries and tool descriptions
Improves tool selection for LLMs using synthetic descriptions from reasoning
Enables handling large tool sets without exceeding context window limits
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses synthetic tool descriptions generated by LLM
Aligns queries with tools in description language space
Improves retriever performance without requiring training