🤖 AI Summary
To address the limited reasoning gains that small language models (SLMs) obtain under reinforcement learning with verifiable rewards (RLVR), this paper proposes G²RPO-A, an adaptive guided group relative policy optimization algorithm. G²RPO-A injects ground-truth reasoning steps into rollout trajectories and dynamically modulates the guidance strength to match training progress, overcoming the saturation inherent in fixed-strength guidance. On mathematical reasoning and code-generation tasks, G²RPO-A significantly outperforms standard GRPO, improving SLM performance by 5.2–9.7 percentage points across multiple benchmarks (e.g., GSM8K, HumanEval). The results suggest that lightweight models can approach the reasoning quality of large language models (LLMs) through structured, adaptive reasoning guidance. This work establishes a scalable, parameter-efficient pathway for enhancing reasoning in resource-constrained models without architectural modification or external tool integration.
📝 Abstract
Reinforcement Learning with Verifiable Rewards (RLVR) has markedly enhanced the reasoning abilities of large language models (LLMs). Its success, however, largely depends on strong base models with rich world knowledge, yielding only modest improvements for small language models (SLMs). To address this limitation, we investigate Guided GRPO, which injects ground-truth reasoning steps into rollout trajectories to compensate for SLMs' inherent weaknesses. Through a comprehensive study of various guidance configurations, we find that naively adding guidance delivers limited gains. These insights motivate G$^2$RPO-A, an adaptive algorithm that automatically adjusts guidance strength in response to the model's evolving training dynamics. Experiments on mathematical reasoning and code-generation benchmarks confirm that G$^2$RPO-A substantially outperforms vanilla GRPO. Our code and models are available at https://github.com/T-Lab-CUHKSZ/G2RPO-A.
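The core idea described above — prepending a tunable fraction of ground-truth reasoning steps to rollout prompts and adapting that fraction as training progresses — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names (`adaptive_guidance_strength`, `guided_prompt`), the accuracy-threshold update rule, and all parameter values are assumptions chosen for clarity.

```python
def adaptive_guidance_strength(strength, group_accuracy, target=0.5,
                               step=0.1, lo=0.0, hi=1.0):
    """Hypothetical controller: increase guidance when the rollout group
    solves too few problems, and decay it as the model improves, so that
    guidance does not saturate at a fixed strength."""
    if group_accuracy < target:
        strength = min(hi, strength + step)  # model struggling: guide more
    else:
        strength = max(lo, strength - step)  # model improving: guide less
    return strength


def guided_prompt(question, gt_steps, strength):
    """Illustrative guidance injection: prepend the first `strength`
    fraction of ground-truth reasoning steps to the rollout prompt."""
    k = round(strength * len(gt_steps))
    hint = "\n".join(gt_steps[:k])
    return f"{question}\n{hint}" if hint else question
```

For example, with `strength = 0.5` and four ground-truth steps, the first two steps are injected; if the group then solves most problems, the controller lowers `strength` toward zero on the next iteration. The actual update schedule in G$^2$RPO-A is driven by training dynamics described in the paper, not this simple threshold rule.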