OTC: Optimal Tool Calls via Reinforcement Learning

📅 2025-04-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address redundant, inefficient, and costly tool invocation in Tool-Integrated Reasoning (TIR), this paper proposes the first reinforcement learning framework (instantiated with PPO and GRPO) that explicitly optimizes the number of tool calls. It models tool-usage efficiency directly within TIR and introduces a composite reward that jointly optimizes answer correctness and invocation cost, thereby maximizing tool productivity. Fine-tuning policy models based on Qwen-2.5 and Qwen-Math, experiments demonstrate up to a 73.1% reduction in tool calls and a 229.4% improvement in tool productivity while maintaining answer accuracy. By incorporating tool economics, i.e., cost-awareness, into the TIR optimization objective, the work establishes a paradigm for efficient, low-cost tool-augmented reasoning.

📝 Abstract
Tool-integrated reasoning (TIR) augments large language models (LLMs) with the ability to invoke external tools, such as search engines and code interpreters, to solve tasks beyond the capabilities of language-only reasoning. While reinforcement learning (RL) has shown promise in improving TIR by optimizing final answer correctness, existing approaches often overlook the efficiency and cost associated with tool usage. This can lead to suboptimal behavior, including excessive tool calls that increase computational and financial overhead, or insufficient tool use that compromises answer quality. In this work, we propose Optimal Tool Call-controlled Policy Optimization (OTC-PO), a simple yet effective RL-based framework that encourages models to produce accurate answers with minimal tool calls. Our method introduces a tool-integrated reward that jointly considers correctness and tool efficiency, promoting high tool productivity. We instantiate this framework within both Proximal Policy Optimization (PPO) and Group Relative Policy Optimization (GRPO), resulting in OTC-PPO and OTC-GRPO. Experiments with Qwen-2.5 and Qwen-Math across multiple QA benchmarks show that our approach reduces tool calls by up to 73.1% and improves tool productivity by up to 229.4%, while maintaining comparable answer accuracy. To the best of our knowledge, this is the first RL-based framework that explicitly optimizes tool-use efficiency in TIR.
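The "tool-integrated reward that jointly considers correctness and tool efficiency" can be illustrated with a minimal sketch. This is not the paper's exact formulation (the paper couples the efficiency scaling into PPO/GRPO training); the function name and the linear decay with a `max_calls` cap are illustrative assumptions:

```python
def tool_integrated_reward(correct: bool, tool_calls: int, max_calls: int = 8) -> float:
    """Illustrative composite reward: answer correctness gated first, then
    scaled by a tool-efficiency factor that shrinks as more tool calls are
    spent. Incorrect answers earn zero regardless of tool usage, so the
    model cannot trade correctness for cheapness."""
    if not correct:
        return 0.0
    # Efficiency factor in (0, 1]: zero tool calls yields the full reward,
    # and each additional call (up to max_calls) reduces it.
    efficiency = 1.0 - min(tool_calls, max_calls) / (max_calls + 1)
    return efficiency
```

Under this shape, a correct answer reached without tools outranks one reached with several calls, which is the incentive that drives the reported reduction in tool calls.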
Problem

Research questions and friction points this paper is trying to address.

Optimizes tool usage efficiency in LLMs via reinforcement learning
Reduces excessive tool calls to lower computational costs
Balances answer accuracy and tool productivity in TIR
Innovation

Methods, ideas, or system contributions that make the work stand out.

RL-based framework optimizes tool-use efficiency
Joint reward for correctness and tool efficiency
Reduces tool calls while maintaining accuracy
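The "tool productivity" metric behind the reported 229.4% improvement is, roughly, benefit per unit of tool cost. A hypothetical helper (the name and the zero-call convention are assumptions, not taken from the paper):

```python
def tool_productivity(num_correct: int, total_tool_calls: int) -> float:
    """Correct answers obtained per tool call: a benefit/cost ratio.
    With zero tool calls, any correct answers come for free, so we
    return the raw count rather than dividing by zero."""
    if total_tool_calls == 0:
        return float(num_correct)
    return num_correct / total_tool_calls
```

By this measure, productivity rises either by answering more questions correctly or by spending fewer tool calls to do so, which is exactly the trade-off the joint reward targets.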