π€ AI Summary
Existing LLM-based tool learning methods predominantly formulate multi-step tool invocation as a text generation task, relying on supervised fine-tuning and thus struggling with the dynamic decision-making complexity inherent in sequential tool use. This work proposes the first step-wise reinforcement learning framework that explicitly models tool calling as a serialized decision process. We introduce a step-level reward shaping mechanism that separately quantifies the success and task-relevant contribution of each individual tool call. Further, we integrate policy gradient optimization with LLMβtool interface alignment to enable fine-grained policy updates. Evaluated on multi-step tool-use benchmarks, our approach achieves substantial improvements: +18.7% in task completion rate and +22.3% in tool-call accuracy, while significantly enhancing cross-step decision robustness. This framework establishes a novel paradigm for advancing LLM-based embodied intelligence and complex, multi-stage task execution.
π Abstract
Despite powerful text generation capabilities, large language models (LLMs) still need to learn how to utilize external tools to solve complex tasks, a process known as tool learning. Existing methods primarily rely on supervised fine-tuning to enhance tool-use capabilities, treating tool learning as a text-generation task while overlooking the decision-making complexities inherent in multi-step contexts. In this work, we propose modeling tool learning as a dynamic decision-making task and introduce StepTool, a novel step-grained reinforcement learning framework that enhances the multi-step tool use capabilities of LLMs. StepTool consists of two main components: Step-grained Reward Shaping, which assigns rewards at each tool interaction based on the success of tool invocation and its contribution to the task; and Step-grained Optimization, which uses policy gradient methods to optimize the model in a multi-step manner. Experimental results demonstrate that StepTool significantly outperforms existing methods in multi-step, tool-based tasks, offering a robust solution for tool learning.