Experience is the Best Teacher: Motivating Effective Exploration in Reinforcement Learning for LLMs

📅 2026-03-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge that large language models often struggle to discover high-reward behaviors in rule-based reinforcement learning due to exploration being confined within the current policy distribution. To overcome this limitation, the paper proposes HeRL, a novel framework that explicitly leverages hindsight experiences—specifically, suboptimal trajectories along with their evaluation criteria—as contextual guidance to steer policy exploration toward out-of-distribution, high-quality responses. HeRL integrates hindsight experience replay, in-context learning, and gradient-based expectation estimation, and incorporates reward shaping to prioritize outputs with greater potential for improvement. Evaluated across multiple benchmarks, HeRL significantly outperforms existing methods, enhancing both exploration efficiency and reasoning capability, while also demonstrating a test-time self-improvement property through experience-guided adaptation.

📝 Abstract
Reinforcement Learning (RL) with rubric-based rewards has recently shown remarkable progress in enhancing the general reasoning capabilities of Large Language Models (LLMs), yet still suffers from ineffective exploration confined to the current policy distribution. In fact, RL optimization can be viewed as steering the policy toward an ideal distribution that maximizes the rewards, so effective exploration should align its effort with that desired target. Leveraging this insight, we propose HeRL, a Hindsight experience guided Reinforcement Learning framework that bootstraps effective exploration by explicitly telling LLMs the desired behaviors specified in the rewards. Concretely, HeRL treats failed trajectories along with their unmet rubrics as hindsight experience, which serves as in-context guidance for the policy to explore desired responses beyond its current distribution. Additionally, we introduce a bonus reward to incentivize responses with greater potential for improvement under such guidance. HeRL facilitates effective learning from desired high-quality samples without repeated trial-and-error from scratch, yielding a theoretically more accurate estimation of the expected gradient. Extensive experiments across various benchmarks demonstrate that HeRL achieves superior performance gains over baselines, and can further benefit from experience-guided self-improvement at test time. Our code is available at https://github.com/sikelifei/HeRL.
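The abstract's two key moves — turning a failed trajectory plus its unmet rubrics into in-context guidance, and adding a bonus reward for improvement under that guidance — can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the function names, the substring-based rubric scorer, and the `bonus_weight` parameter are all assumptions for exposition.

```python
# Hypothetical sketch of HeRL-style hindsight-guided exploration.
# The rubric scorer, prompt template, and bonus weighting below are
# illustrative assumptions, not the paper's actual code.

def rubric_reward(response, rubrics):
    """Fraction of rubric criteria the response satisfies (toy substring scorer)."""
    return sum(r in response for r in rubrics) / len(rubrics)

def build_hindsight_prompt(question, failed_response, rubrics):
    """Prepend the failed trajectory and its unmet rubrics as in-context guidance."""
    unmet = [r for r in rubrics if r not in failed_response]
    return (
        f"Question: {question}\n"
        f"Previous attempt: {failed_response}\n"
        f"Unmet criteria: {', '.join(unmet)}\n"
        "Revise the answer so that every criterion is satisfied."
    )

def shaped_reward(r_plain, r_guided, bonus_weight=0.5):
    """Base reward plus a bonus when the response improves under guidance.

    r_plain  : reward of the response sampled without guidance
    r_guided : reward of the response sampled with the hindsight prompt
    """
    return r_plain + bonus_weight * max(0.0, r_guided - r_plain)
```

Under this sketch, a trajectory that scores 0.5 unguided but 1.0 with hindsight guidance receives a shaped reward of 0.75, so the policy is nudged toward responses that the rubric feedback can actually improve.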
Problem

Research questions and friction points this paper is trying to address.

Reinforcement Learning
Large Language Models
Exploration
Reward Shaping
Hindsight Experience
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hindsight Experience
Reinforcement Learning
Large Language Models
Reward-Guided Exploration
In-Context Learning
Wenjian Zhang
Dalian University of Technology
Kongcheng Zhang
Zhejiang University
Jiaxin Qi
Chinese Academy of Sciences
Baisheng Lai
Chinese Academy of Sciences
Jianqiang Huang
Nanyang Technological University, Chinese Academy of Sciences
Computer Vision
Machine Learning
Causality