In-Context Reinforcement Learning for Tool Use in Large Language Models

📅 2026-03-09
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work proposes an RL-only training framework that enables large language models to overcome their limitations on complex tasks, which stem from reliance on internal knowledge and ineffective invocation of external tools. Departing from the conventional supervised fine-tuning (SFT) cold start, the approach introduces a small number of in-context examples during RL rollouts to guide tool usage and employs a progressive example-annealing strategy, ultimately reaching zero-shot tool use. By integrating in-context guidance with standard RL algorithms, the method achieves state-of-the-art performance across multiple reasoning and tool-use benchmarks. Notably, it demonstrates for the first time that highly effective and scalable tool-calling capabilities can be trained without any supervised fine-tuning.

πŸ“ Abstract
While large language models (LLMs) exhibit strong reasoning abilities, their performance on complex tasks is often constrained by the limitations of their internal knowledge. A compelling approach to overcome this challenge is to augment these models with external tools -- such as Python interpreters for mathematical computations or search engines for retrieving factual information. However, enabling models to use these tools effectively remains a significant challenge. Existing methods typically rely on cold-start pipelines that begin with supervised fine-tuning (SFT), followed by reinforcement learning (RL). These approaches often require substantial amounts of labeled data for SFT, which is expensive to annotate or synthesize. In this work, we propose In-Context Reinforcement Learning (ICRL), an RL-only framework that eliminates the need for SFT by leveraging few-shot prompting during the rollout stage of RL. Specifically, ICRL introduces in-context examples within the rollout prompts to teach the model how to invoke external tools. Furthermore, as training progresses, the number of in-context examples is gradually reduced, eventually reaching a zero-shot setting where the model learns to call tools independently. We conduct extensive experiments across a range of reasoning and tool-use benchmarks. Results show that ICRL achieves state-of-the-art performance, demonstrating its effectiveness as a scalable, data-efficient alternative to traditional SFT-based pipelines.
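The example-annealing idea in the abstract can be sketched in a few lines: few-shot tool-use demonstrations are prepended to each rollout prompt, and their count is gradually reduced to zero over training. This is a minimal illustration under assumed details, not the paper's implementation; the function names (`num_examples`, `build_rollout_prompt`) and the linear schedule are assumptions.

```python
import random

def num_examples(step: int, anneal_steps: int, k0: int) -> int:
    """Linearly anneal the number of in-context examples from k0 down to 0."""
    frac = min(step / anneal_steps, 1.0)
    return max(int(round(k0 * (1.0 - frac))), 0)

def build_rollout_prompt(question: str, example_pool: list[str],
                         step: int, anneal_steps: int = 1000, k0: int = 4) -> str:
    """Prepend k annealed few-shot tool-use demonstrations to the task prompt."""
    k = num_examples(step, anneal_steps, k0)
    shots = random.sample(example_pool, k) if k else []
    return "\n\n".join(shots + [f"Question: {question}\nAnswer:"])
```

Early in training the rollout prompt carries all `k0` demonstrations; by `anneal_steps` it contains only the question, so the policy must have internalized the tool-calling format to keep earning reward.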
Problem

Research questions and friction points this paper is trying to address.

tool use
large language models
reinforcement learning
supervised fine-tuning
in-context learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

In-Context Learning
Reinforcement Learning
Tool Use
Large Language Models
Few-Shot Prompting