Reward Is Enough: LLMs Are In-Context Reinforcement Learners

📅 2025-05-21
🏛️ arXiv.org
📈 Citations: 16
Influential: 1
📄 PDF
🤖 AI Summary
This work investigates whether large language models (LLMs) can spontaneously perform reinforcement learning–like optimization during inference—termed *in-context reinforcement learning* (ICRL). To this end, we propose an ICRL framework that requires no parameter updates and instead relies solely on multi-round context augmentation and scalar reward feedback: at each round, the prompt is dynamically reconstructed based on prior model responses and (human- or self-generated) rewards, enabling online improvement of output quality. Our key contribution is the first empirical demonstration that LLMs possess gradient-free, inference-time reward maximization capability, supporting closed-loop self-feedback optimization and extending beyond conventional test-time learning paradigms. Evaluated on Game of 24, creative writing, and ScienceWorld, ICRL significantly outperforms Self-Refine and Reflexion. Notably, it maintains robust performance gains even when using self-assessed rewards.

📝 Abstract
Reinforcement learning (RL) is a human-designed framework for solving sequential decision-making problems. In this work, we demonstrate that, surprisingly, RL emerges at inference time in LLMs (Large Language Models) -- a phenomenon known as in-context RL (ICRL). Specifically, we propose a novel multi-round prompting framework called ICRL prompting. The goal is to prompt the LLM to complete a task. After the LLM generates a response at the current round, we give numerical scalar feedback for the response, called the reward. At the next round, we prompt the LLM again with the same task and a context consisting of all previous responses and rewards. We observe that the quality of the LLM's responses increases as the context grows. In other words, the LLM is able to maximize the scalar reward signal at inference time, just like an RL algorithm. We evaluate ICRL prompting on three benchmarks (Game of 24, creative writing, and ScienceWorld) and demonstrate significant performance improvements over baseline methods such as Self-Refine and Reflexion. Surprisingly, in some experiments the reward signals are generated by the LLM itself, yet performance improvements are still observed from ICRL prompting, offering a promising paradigm for scaling test-time compute.
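The multi-round loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: `query_llm` and `reward_fn` are hypothetical stand-ins for the model call and the (human- or self-assessed) scalar reward.

```python
def icrl_prompting(task, query_llm, reward_fn, num_rounds=5):
    """Sketch of ICRL prompting: each round re-prompts the LLM with the
    task plus all previous (response, reward) pairs as in-context history.
    No parameters are updated; improvement comes only from the growing context.
    """
    history = []  # accumulated (response, reward) pairs across rounds
    best_response, best_reward = None, float("-inf")
    for _ in range(num_rounds):
        # Reconstruct the prompt from all prior responses and their rewards.
        context = "".join(
            f"Previous response: {resp}\nReward: {rew}\n\n"
            for resp, rew in history
        )
        prompt = f"{context}Task: {task}\nGive an improved answer."
        response = query_llm(prompt)
        reward = reward_fn(response)  # scalar feedback; may be self-generated
        history.append((response, reward))
        if reward > best_reward:
            best_response, best_reward = response, reward
    return best_response, best_reward
```

With a self-assessed reward, `reward_fn` would itself be another LLM call scoring the response; the loop structure is unchanged, which is why the authors can swap human feedback for model-generated rewards.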
Problem

Research questions and friction points this paper is trying to address.

Demonstrates RL emerges in LLMs during inference
Introduces ICRL prompting for self-improvement via rewards
Evaluates method on tasks like math and creative writing
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs perform reinforcement learning during inference time
Multi-round prompting with reward feedback for self-improvement
In-context RL prompting enhances performance across diverse tasks