From Exploration to Mastery: Enabling LLMs to Master Tools via Self-Driven Interactions

📅 2024-10-10
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
Existing tool documentation is often inaccurate or incomplete, hindering large language models (LLMs) from effectively invoking external tools. To address this, we propose DRAFT, a framework that dynamically optimizes tool documentation through a self-driven, closed-loop interaction between LLMs and tools. Its core is a three-stage self-iterative loop of experience gathering, learning from experience, and documentation rewriting, augmented by diversity-promoting exploration and tool-adaptive early stopping. The method integrates LLM self-feedback analysis, trial-and-error-driven rewriting, and tool-aware sampling. On multiple benchmarks, DRAFT significantly improves tool-call accuracy, and the refined documentation exhibits strong cross-model generalization, more efficient iteration, and robustness against overfitting. This work constitutes the first approach enabling LLM-led autonomous evolution of tool documentation, establishing a high-quality knowledge foundation for tool-augmented AI systems.

📝 Abstract
Tool learning enables Large Language Models (LLMs) to interact with external environments by invoking tools, serving as an effective strategy to mitigate the limitations inherent in their pre-training data. In this process, tool documentation plays a crucial role by providing usage instructions for LLMs, thereby facilitating effective tool utilization. This paper concentrates on the critical challenge of bridging the comprehension gap between LLMs and external tools due to the inadequacies and inaccuracies inherent in existing human-centric tool documentation. We propose a novel framework, DRAFT, aimed at Dynamically Refining tool documentation through the Analysis of Feedback and Trials emanating from LLMs' interactions with external tools. This methodology pivots on an innovative trial-and-error approach, consisting of three distinct learning phases: experience gathering, learning from experience, and documentation rewriting, to iteratively enhance the tool documentation. This process is further optimized by implementing a diversity-promoting exploration strategy to ensure explorative diversity and a tool-adaptive termination mechanism to prevent overfitting while enhancing efficiency. Extensive experiments on multiple datasets demonstrate that DRAFT's iterative, feedback-based refinement significantly ameliorates documentation quality, fostering a deeper comprehension and more effective utilization of tools by LLMs. Notably, our analysis reveals that the tool documentation refined via our approach demonstrates robust cross-model generalization capabilities.
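The three learning phases described above (experience gathering, learning from experience, and documentation rewriting) together with the tool-adaptive termination mechanism can be sketched as a simple refinement loop. This is an illustrative reconstruction, not the paper's implementation: the four callables (`propose_call`, `invoke_tool`, `critique`, `rewrite`) stand in for LLM prompts and the real tool endpoint, and the convergence test via text similarity is an assumed stand-in for the paper's actual termination criterion.

```python
from difflib import SequenceMatcher


def refine_documentation(doc, propose_call, invoke_tool, critique, rewrite,
                         max_iters=5, stop_threshold=0.95):
    """Hedged sketch of DRAFT's three-phase trial-and-error loop.

    `propose_call`, `invoke_tool`, `critique`, and `rewrite` are
    illustrative placeholders for LLM calls and the tool under study.
    """
    for _ in range(max_iters):
        # Phase 1: experience gathering -- the LLM proposes a trial
        # invocation from the current documentation; the tool responds.
        query = propose_call(doc)
        result = invoke_tool(query)

        # Phase 2: learning from experience -- self-feedback on what the
        # documentation got wrong or left out, given the observed result.
        feedback = critique(doc, query, result)

        # Phase 3: documentation rewriting from the feedback.
        new_doc = rewrite(doc, feedback)

        # Tool-adaptive termination (assumed form): stop once successive
        # revisions converge, to save iterations and avoid overfitting.
        if SequenceMatcher(None, doc, new_doc).ratio() >= stop_threshold:
            return new_doc
        doc = new_doc
    return doc
```

A minimal dry run with stub callables shows the loop terminating once the rewrite step stops changing the documentation:

```python
doc = refine_documentation(
    "add(a, b): adds numbers",
    propose_call=lambda d: {"a": 1, "b": 2},
    invoke_tool=lambda q: q["a"] + q["b"],
    critique=lambda d, q, r: "returns an int",
    rewrite=lambda d, f: d if f in d else d + " Note: " + f,
)
```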
Problem

Research questions and friction points this paper is trying to address.

Bridging the comprehension gap between LLMs and external tools
Enhancing LLMs' effectiveness at invoking tools
Improving documentation quality via iterative refinement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-driven tool interaction
Dynamic documentation refinement
Diversity-promoting exploration strategy
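The diversity-promoting exploration strategy can be illustrated with a greedy farthest-point selection: at each step, choose the candidate trial query most dissimilar to everything already explored. This is an assumed sketch, not the paper's scoring function; `jaccard_distance` over word sets is a hypothetical stand-in for whatever similarity measure the authors use.

```python
def jaccard_distance(a, b):
    """Word-level Jaccard distance between two query strings (illustrative)."""
    sa, sb = set(a.split()), set(b.split())
    return 1.0 - len(sa & sb) / len(sa | sb)


def pick_diverse(candidates, explored, distance=jaccard_distance):
    """Greedy farthest-point choice: prefer the candidate whose nearest
    already-explored query is farthest away, promoting explorative diversity."""
    return max(
        candidates,
        key=lambda c: min((distance(c, e) for e in explored), default=1.0),
    )
```

For example, given explored queries about Paris weather, a currency-conversion query (sharing no words with the explored set) is selected over another weather query:

```python
explored = ["get weather in paris", "weather forecast paris"]
candidates = ["weather in paris today", "convert currency usd to eur"]
pick = pick_diverse(candidates, explored)  # → "convert currency usd to eur"
```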