Tool Learning in the Wild: Empowering Language Models as Automatic Tool Agents

📅 2024-05-26
📈 Citations: 12
Influential: 0
🤖 AI Summary
Problem: Large language models (LLMs) struggle with autonomous understanding and invocation of external tools.
Method: This paper proposes AutoTools, an end-to-end automatic tool learning framework, and AutoTools-learning, a corresponding training paradigm. It eliminates reliance on human demonstrations, special tokens, or hard-coded integrations, instead leveraging LLM-driven tool documentation parsing, function synthesis, and program generation to achieve zero-shot tool adaptation and dynamic multi-tool orchestration.
Contribution/Results: Training proceeds in three synthetic-data stages (documentation understanding, relevance learning, and function programming), yielding substantial gains on a newly constructed high-difficulty benchmark. With only 34K synthetic samples, open-source small models achieve up to a 41.2% absolute improvement in tool-call accuracy, marking the first demonstration of efficient generalization for compact models on complex tool-use tasks.

📝 Abstract
Augmenting large language models (LLMs) with external tools has emerged as a promising approach to extend their utility, enabling them to solve practical tasks. Previous methods manually parse tool documentation and create in-context demonstrations, transforming tools into structured formats for LLMs to use in their step-by-step reasoning. However, this manual process requires domain expertise and struggles to scale to large toolsets. Additionally, these methods rely heavily on ad-hoc inference techniques or special tokens to integrate free-form LLM generation with tool-calling actions, limiting the LLM's flexibility in handling diverse tool specifications and integrating multiple tools. In this work, we propose AutoTools, a framework that enables LLMs to automate the tool-use workflow. Specifically, the LLM automatically transforms tool documentation into callable functions, verifying syntax and runtime correctness. Then, the LLM integrates these functions into executable programs to solve practical tasks, flexibly grounding tool-use actions into its reasoning processes. Extensive experiments on existing and newly collected, more challenging benchmarks illustrate the superiority of our framework. Inspired by these promising results, we further investigate how to improve the expertise of LLMs, especially open-source LLMs with fewer parameters, within AutoTools. Thus, we propose the AutoTools-learning approach, training the LLMs with three learning tasks on 34k instances of high-quality synthetic data, including documentation understanding, relevance learning, and function programming. Fine-grained results validate the effectiveness of our overall training approach and each individual task. Our methods are an important step towards the use of LLMs for solving real-world tasks with external tools.
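The workflow the abstract describes, where tool documentation is transformed into a verified callable function and then composed into an executable program, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the LLM call is stubbed with a fixed response, and the function names and verification steps are assumptions.

```python
import ast

def llm_generate_function(tool_doc: str) -> str:
    """Stand-in for an LLM that reads tool documentation and emits
    a callable Python function (hard-coded here for illustration)."""
    return (
        "def get_weather(city: str) -> str:\n"
        "    \"\"\"Return a weather report for `city` (mock API).\"\"\"\n"
        "    return f'Sunny in {city}'\n"
    )

def encapsulate_tool(tool_doc: str):
    """Turn documentation into a callable function, checking both
    syntax and runtime correctness before returning it."""
    code = llm_generate_function(tool_doc)
    ast.parse(code)                       # syntax check
    namespace = {}
    exec(code, namespace)                 # load the generated function
    func = next(v for k, v in namespace.items()
                if not k.startswith("__") and callable(v))
    func("London")                        # runtime smoke test
    return func

# Compose the verified function into a small executable program.
get_weather = encapsulate_tool("GET /weather?city=... returns conditions")
print(f"Trip note: {get_weather('Paris')}")  # -> Trip note: Sunny in Paris
```

In the real framework the generated code would come from the LLM and could fail either check, in which case the error message can be fed back for regeneration.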
Problem

Research questions and friction points this paper is trying to address.

Automating tool documentation parsing for scalable LLM tool integration
Enhancing LLM flexibility in handling diverse tool specifications
Improving open-source LLM expertise for real-world tool-based tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automatically transforms tool documentation into callable functions
Integrates functions into executable programs for task solving
Trains LLMs with synthetic data on three learning tasks
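The three AutoTools-learning tasks map naturally onto instruction-tuning pairs. Below is a hypothetical sketch of one synthetic sample per task; the field names, prompt templates, and tool names are assumptions, not the paper's actual schema.

```python
# Hypothetical synthetic training samples for the three learning tasks.
doc = "search_flights(origin, dest, date): returns available flights."

samples = [
    {   # 1. Documentation understanding: raw doc -> structured interface
        "task": "documentation_understanding",
        "input": f"Summarize this tool's interface:\n{doc}",
        "output": "search_flights(origin: str, dest: str, date: str) -> list",
    },
    {   # 2. Relevance learning: user query + candidates -> relevant tools
        "task": "relevance_learning",
        "input": "Query: book a flight to Tokyo.\nTools: search_flights, get_weather",
        "output": "search_flights",
    },
    {   # 3. Function programming: task -> executable program over the tools
        "task": "function_programming",
        "input": "Find flights from NYC to Tokyo on 2024-06-01.",
        "output": "flights = search_flights('NYC', 'TYO', '2024-06-01')",
    },
]
```

Each task isolates one skill the model needs for end-to-end tool use: reading documentation, selecting among tools, and writing the program that calls them.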