Can GRPO Help LLMs Transcend Their Pretraining Origin?

📅 2025-10-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
GRPO exhibits domain-dependent efficacy in enhancing LLM reasoning—yielding substantial gains on mathematical tasks yet stagnating on medical ones—raising critical questions about its out-of-distribution (OOD) generalization capabilities. Method: We formulate a verifiable reward-based RL framework, train Transformers from scratch, conduct controlled ablation experiments, and complement empirical analysis with theoretical proofs. Contribution/Results: We establish that GRPO functions fundamentally as a conservative reweighting mechanism, reinforcing only patterns within the pretraining distribution without transcending inherent model biases. Crucially, OOD performance improvement occurs *only* when task objectives align with pretraining preferences; otherwise, GRPO remains bounded by the base model’s distributional support. This work provides the first rigorous theoretical characterization of GRPO’s capability limits, offering principled guidelines for designing trustworthy, robust reasoning-augmentation algorithms.
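The "conservative reweighting" claim can be illustrated with a toy sketch (this is an illustrative assumption, not the paper's code): a categorical policy over a handful of candidate solutions, updated with GRPO-style group-relative advantages. Because rollouts are sampled from the policy itself, a correct solution with zero probability under the base model is never sampled and therefore never reinforced, so the policy stays bounded by the base distribution's support.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical, for illustration): a categorical "policy" over
# 4 candidate solutions. Solution 3 is correct but lies OUTSIDE the base
# model's support (probability exactly 0), so GRPO can never sample it and
# therefore never reinforce it -- the "conservative reweighting" bound.
logits = np.array([1.0, 0.5, 0.0, -np.inf])   # -inf logit => zero probability
reward = np.array([0.0, 1.0, 0.0, 1.0])       # verifiable reward per solution

def softmax(z):
    finite = np.isfinite(z)
    e = np.where(finite, np.exp(z - z[finite].max()), 0.0)
    return e / e.sum()

def grpo_step(logits, group_size=8, lr=0.5):
    p = softmax(logits)
    group = rng.choice(len(p), size=group_size, p=p)   # sample a group of rollouts
    r = reward[group]
    if r.std() == 0:                                   # degenerate group: no signal
        return logits
    adv = (r - r.mean()) / r.std()                     # group-relative advantage
    grad = np.zeros_like(p)
    for g, a in zip(group, adv):
        grad += a * (np.eye(len(p))[g] - p)            # advantage-weighted grad of log p_g
    return logits + lr * grad / group_size

for _ in range(300):
    logits = grpo_step(logits)

p = softmax(logits)
# Probability mass concentrates on the best in-support solution (index 1),
# while the unsupported solution (index 3) stays at exactly 0.
print(p.round(3))
```

The sketch mirrors the paper's conclusion in miniature: GRPO sharpens the policy toward the best solutions the pretrained distribution already covers, but never assigns mass to solutions outside that support.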

📝 Abstract
Reinforcement Learning with Verifiable Rewards (RLVR), primarily driven by the Group Relative Policy Optimization (GRPO) algorithm, is a leading approach for enhancing the reasoning abilities of Large Language Models (LLMs). Despite its wide adoption, GRPO's gains are often inconsistent; for instance, a model may show significant improvement in one reasoning domain, like mathematics, yet remain stagnant in another, such as medicine. This inconsistency raises a critical question: under what conditions does GRPO improve reasoning and generalize out-of-distribution (OOD)? We investigate this from a data distribution perspective. We first prove theoretically that GRPO is a conservative reweighting scheme, bounded by the base model's distribution and thus unable to discover completely novel solutions. We further validate this in carefully designed controlled studies by training transformers from scratch, evaluating generalization across reasoning depth, input length, token representation, and compositionality. Our results provide a principled explanation for GRPO's boundaries: OOD improvement emerges only when the target task aligns with the model's pretrained biases, while gains on in-distribution (ID) tasks diminish as performance saturates. This reframes GRPO not as a universal reasoning enhancer but as a tool that sharpens pretraining biases. Our findings motivate future development of algorithms that can expand a model's capabilities beyond its pretraining origin.
Problem

Research questions and friction points this paper is trying to address.

Why GRPO's reasoning gains are inconsistent across domains (e.g., strong in mathematics, stagnant in medicine)
Whether GRPO can discover solutions beyond the base model's pretraining distribution
Under what conditions GRPO's improvements generalize out-of-distribution (OOD)
Innovation

Methods, ideas, or system contributions that make the work stand out.

Theoretical proof that GRPO is a conservative reweighting scheme bounded by the base model's distribution
Controlled studies training transformers from scratch, probing generalization across reasoning depth, input length, token representation, and compositionality
Characterization of GRPO's limits: OOD gains emerge only when the target task aligns with pretraining biases