๐ค AI Summary
To address high gradient estimation variance and slow convergence in chain-of-thought (CoT) reasoning caused by static sampling, this paper proposes the first prompt-level dynamic sample allocation framework. Our method integrates rejection sampling, reinforcement learning (RAFT/GRPO), and stochastic optimization theory to dynamically allocate computational resources per prompt based on real-time acceptance rates and gradient normsโminimizing gradient variance under a fixed budget and providing theoretical convergence acceleration guarantees. Crucially, we formulate CoT training as a latent-variable optimization problem, abandoning uniform inference budgets in favor of difficulty-aware adaptive sampling. Empirical evaluation on mathematical reasoning tasks demonstrates 2โ4ร faster training and significant accuracy gains. The framework is plug-and-play, fully compatible with diverse RL-based CoT algorithms, and exhibits strong generalization across models and tasks.
๐ Abstract
Chain-of-thought (CoT) reasoning in large language models (LLMs) can be formalized as a latent variable problem, where the model needs to generate intermediate reasoning steps. While prior approaches such as iterative reward-ranked fine-tuning (RAFT) have relied on such formulations, they typically apply uniform inference budgets across prompts, which fails to account for variability in difficulty and convergence behavior. This work identifies the main bottleneck in CoT training as inefficient stochastic gradient estimation due to static sampling strategies. We propose GVM-RAFT, a prompt-specific Dynamic Sample Allocation Strategy designed to minimize stochastic gradient variance under a computational budget constraint. The method dynamically allocates computational resources by monitoring prompt acceptance rates and stochastic gradient norms, ensuring that the resulting gradient variance is minimized. Our theoretical analysis shows that the proposed dynamic sampling strategy leads to accelerated convergence guarantees under suitable conditions. Experiments on mathematical reasoning show that GVM-RAFT achieves a 2-4x speedup and considerable accuracy improvements over vanilla RAFT. The proposed dynamic sampling strategy is general and can be incorporated into other reinforcement learning algorithms, such as GRPO, leading to similar improvements in convergence and test accuracy. Our code is available at https://github.com/RLHFlow/GVM.