StepTool: Enhancing Multi-Step Tool Usage in LLMs through Step-Grained Reinforcement Learning

πŸ“… 2024-10-10
πŸ“ˆ Citations: 1
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing LLM-based tool learning methods predominantly formulate multi-step tool invocation as a text generation task, relying on supervised fine-tuning and thus struggling with the dynamic decision-making complexity inherent in sequential tool use. This work proposes the first step-wise reinforcement learning framework that explicitly models tool calling as a serialized decision process. We introduce a step-level reward shaping mechanism that separately quantifies the success and task-relevant contribution of each individual tool call. Further, we integrate policy gradient optimization with LLM–tool interface alignment to enable fine-grained policy updates. Evaluated on multi-step tool-use benchmarks, our approach achieves substantial improvements: +18.7% in task completion rate and +22.3% in tool-call accuracy, while significantly enhancing cross-step decision robustness. This framework establishes a novel paradigm for advancing LLM-based embodied intelligence and complex, multi-stage task execution.

Technology Category

Application Category

πŸ“ Abstract
Despite powerful text generation capabilities, large language models (LLMs) still need to learn how to utilize external tools to solve complex tasks, a process known as tool learning. Existing methods primarily rely on supervised fine-tuning to enhance tool-use capabilities, treating tool learning as a text-generation task while overlooking the decision-making complexities inherent in multi-step contexts. In this work, we propose modeling tool learning as a dynamic decision-making task and introduce StepTool, a novel step-grained reinforcement learning framework that enhances the multi-step tool use capabilities of LLMs. StepTool consists of two main components: Step-grained Reward Shaping, which assigns rewards at each tool interaction based on the success of tool invocation and its contribution to the task; and Step-grained Optimization, which uses policy gradient methods to optimize the model in a multi-step manner. Experimental results demonstrate that StepTool significantly outperforms existing methods in multi-step, tool-based tasks, offering a robust solution for tool learning.
Problem

Research questions and friction points this paper is trying to address.

Enhancing multi-step tool usage in LLMs
Addressing decision-making complexities in tool learning
Improving tool-use capabilities with step-grained reinforcement learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Step-grained reinforcement learning framework
Step-grained Reward Shaping
Step-grained Optimization
πŸ”Ž Similar Papers
Y
Yuanqing Yu
Department of Computer Science and Technology, Tsinghua University
Z
Zhefan Wang
Department of Computer Science and Technology, Tsinghua University
Weizhi Ma
Weizhi Ma
Tsinghua University
LLM and AgentsRecommendationAI for Healthcare
Z
Zhicheng Guo
Department of Computer Science and Technology, Tsinghua University
Jingtao Zhan
Jingtao Zhan
Tsinghua University
Information RetrievalNatural Language ProcessingAI
S
Shuai Wang
Huawei Noah’s Ark Lab
Chuhan Wu
Chuhan Wu
WeChat AI, Tencent
Foundation ModelPretrainingPost TrainingLLM Agent
Z
Zhiqiang Guo
Department of Computer Science and Technology, Tsinghua University
M
Min Zhang
Department of Computer Science and Technology, Tsinghua University