Reinforce-Ada: An Adaptive Sampling Framework for Reinforce-Style LLM Training

📅 2025-10-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address unstable gradient estimation and suboptimal inference performance in reinforcement learning (RL) training of large language models (LLMs) caused by fixed uniform prompt sampling, this paper proposes an adaptive sampling framework. The method integrates online RL, variance-aware sampling, dynamic budget allocation, and adaptive data scheduling. Key contributions include: (1) an online continual elimination mechanism that alternates between reward estimation and sampling, dynamically terminating sampling for prompts with low information gain; and (2) a reward diversity grouping strategy coupled with a global-statistics-based advantage baseline, significantly improving policy update stability. Experiments across multiple LLMs and reasoning benchmarks demonstrate accelerated convergence and superior final performance. Under balanced sampling settings, the framework outperforms GRPO, achieving both higher sample efficiency and stronger generalization.

📝 Abstract
Reinforcement learning applied to large language models (LLMs) for reasoning tasks is often bottlenecked by unstable gradient estimates due to fixed and uniform sampling of responses across prompts. Prior work such as GVM-RAFT addresses this by dynamically allocating inference budget per prompt to minimize stochastic gradient variance under a budget constraint. Inspired by this insight, we propose Reinforce-Ada, an adaptive sampling framework for online RL post-training of LLMs that continuously reallocates sampling effort to the prompts with the greatest uncertainty or learning potential. Unlike conventional two-stage allocation methods, Reinforce-Ada interleaves estimation and sampling in an online successive elimination process, and automatically stops sampling for a prompt once sufficient signal is collected. To stabilize updates, we form fixed-size groups with enforced reward diversity and compute advantage baselines using global statistics aggregated over the adaptive sampling phase. Empirical results across multiple model architectures and reasoning benchmarks show that Reinforce-Ada accelerates convergence and improves final performance compared to GRPO, especially when using the balanced sampling variant. Our work highlights the central role of variance-aware, adaptive data curation in enabling efficient and reliable reinforcement learning for reasoning-capable LLMs. Code is available at https://github.com/RLHFlow/Reinforce-Ada.
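To make the abstract's mechanism concrete, here is a minimal sketch of the online successive-elimination idea: each round, every still-active prompt receives one more sampled response, and a prompt is retired once its collected rewards show enough diversity to fill a group with informative (non-degenerate) advantages. The baseline is then computed from global statistics over all adaptively collected samples. Note that `sample_fn`, `reward_fn`, and the binary diversity stop rule are illustrative assumptions, not the paper's exact criterion; see the linked repository for the actual implementation.

```python
def adaptive_sample(prompts, sample_fn, reward_fn, group_size=4, max_rounds=8):
    """Sketch of successive-elimination adaptive sampling (simplified).

    sample_fn(prompt) -> response and reward_fn(prompt, response) -> float
    are hypothetical stand-ins for policy sampling and reward scoring.
    """
    rewards = {p: [] for p in prompts}   # all rewards collected per prompt
    active = set(prompts)
    for _ in range(max_rounds):
        if not active:
            break
        for p in list(active):
            resp = sample_fn(p)                     # one more response
            rewards[p].append(reward_fn(p, resp))
            r = rewards[p]
            # stop once rewards are diverse enough to form a useful group
            # (proxy for "sufficient signal collected")
            if len(r) >= group_size and min(r) < max(r):
                active.discard(p)
    # global-statistics baseline over the whole adaptive sampling phase,
    # rather than a per-group mean as in vanilla GRPO
    all_r = [x for rs in rewards.values() for x in rs]
    baseline = sum(all_r) / len(all_r)
    advantages = {p: [x - baseline for x in rs] for p, rs in rewards.items()}
    return rewards, advantages
```

Under this sketch, prompts whose responses are all correct or all incorrect (zero reward diversity, hence near-zero gradient signal) keep consuming budget only until `max_rounds`, while informative prompts are released early, which is the variance-aware reallocation the abstract describes.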
Problem

Research questions and friction points this paper is trying to address.

How to reduce gradient variance caused by fixed, uniform response sampling in RL training
How to dynamically allocate inference budget toward high-uncertainty prompts
How to decide online, per prompt, when enough sampling signal has been collected
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive sampling framework for online RL post-training
Online successive elimination process for prompt sampling
Fixed-size groups with enforced reward diversity