Optimizing Chain-of-Thought Reasoners via Gradient Variance Minimization in Rejection Sampling and RL

๐Ÿ“… 2025-05-05
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
To address high gradient estimation variance and slow convergence in chain-of-thought (CoT) reasoning caused by static sampling, this paper proposes the first prompt-level dynamic sample allocation framework. Our method integrates rejection sampling, reinforcement learning (RAFT/GRPO), and stochastic optimization theory to dynamically allocate computational resources per prompt based on real-time acceptance rates and gradient normsโ€”minimizing gradient variance under a fixed budget and providing theoretical convergence acceleration guarantees. Crucially, we formulate CoT training as a latent-variable optimization problem, abandoning uniform inference budgets in favor of difficulty-aware adaptive sampling. Empirical evaluation on mathematical reasoning tasks demonstrates 2โ€“4ร— faster training and significant accuracy gains. The framework is plug-and-play, fully compatible with diverse RL-based CoT algorithms, and exhibits strong generalization across models and tasks.

Technology Category

Application Category

๐Ÿ“ Abstract
Chain-of-thought (CoT) reasoning in large language models (LLMs) can be formalized as a latent variable problem, where the model needs to generate intermediate reasoning steps. While prior approaches such as iterative reward-ranked fine-tuning (RAFT) have relied on such formulations, they typically apply uniform inference budgets across prompts, which fails to account for variability in difficulty and convergence behavior. This work identifies the main bottleneck in CoT training as inefficient stochastic gradient estimation due to static sampling strategies. We propose GVM-RAFT, a prompt-specific Dynamic Sample Allocation Strategy designed to minimize stochastic gradient variance under a computational budget constraint. The method dynamically allocates computational resources by monitoring prompt acceptance rates and stochastic gradient norms, ensuring that the resulting gradient variance is minimized. Our theoretical analysis shows that the proposed dynamic sampling strategy leads to accelerated convergence guarantees under suitable conditions. Experiments on mathematical reasoning show that GVM-RAFT achieves a 2-4x speedup and considerable accuracy improvements over vanilla RAFT. The proposed dynamic sampling strategy is general and can be incorporated into other reinforcement learning algorithms, such as GRPO, leading to similar improvements in convergence and test accuracy. Our code is available at https://github.com/RLHFlow/GVM.
Problem

Research questions and friction points this paper is trying to address.

Optimizes Chain-of-Thought reasoning via gradient variance minimization
Addresses inefficient stochastic gradient estimation in CoT training
Proposes dynamic sample allocation for faster convergence and accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic Sample Allocation Strategy minimizes gradient variance
Monitors prompt acceptance rates and gradient norms
Accelerates convergence with computational budget constraints
๐Ÿ”Ž Similar Papers
No similar papers found.
Jiarui Yao
Jiarui Yao
CS, UIUC
Reinforcement LearningMachine LearningLarge Language Models
Y
Yifan Hao
University of Illinois Urbana-Champaign
Hanning Zhang
Hanning Zhang
University of Illinois at Urbana-Champaign
Natural Language ProcessingLarge Language Models
Hanze Dong
Hanze Dong
Microsoft Research
Machine LearningDeep LearningReinforcement Learning
W
Wei Xiong
University of Illinois Urbana-Champaign
N
Nan Jiang
University of Illinois Urbana-Champaign
T
Tong Zhang
University of Illinois Urbana-Champaign