G$^2$RPO-A: Guided Group Relative Policy Optimization with Adaptive Guidance

📅 2025-08-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the limited improvement in reasoning capabilities of small language models (SLMs) under reinforcement learning with verifiable rewards (RLVR), this paper proposes G²RPO-A: an adaptive guided group relative policy optimization algorithm. G²RPO-A injects ground-truth reasoning steps into rollout trajectories and dynamically modulates the guidance strength to align with training progression—thereby overcoming the saturation issue inherent in fixed-strength guidance. Evaluated on mathematical reasoning and code generation tasks, G²RPO-A significantly outperforms standard GRPO, boosting SLM performance by 5.2–9.7 percentage points across multiple benchmarks (e.g., GSM8K, HumanEval). Crucially, it demonstrates—for the first time—that lightweight models can achieve reasoning quality comparable to large language models (LLMs) through structured, adaptive reasoning guidance. This work establishes a scalable, parameter-efficient pathway for enhancing reasoning in resource-constrained models without architectural modification or external tool integration.

Technology Category

Application Category

📝 Abstract
Reinforcement Learning with Verifiable Rewards (RLVR) has markedly enhanced the reasoning abilities of large language models (LLMs). Its success, however, largely depends on strong base models with rich world knowledge, yielding only modest improvements for small-size language models (SLMs). To address this limitation, we investigate Guided GRPO, which injects ground-truth reasoning steps into roll-out trajectories to compensate for SLMs' inherent weaknesses. Through a comprehensive study of various guidance configurations, we find that naively adding guidance delivers limited gains. These insights motivate G$^2$RPO-A, an adaptive algorithm that automatically adjusts guidance strength in response to the model's evolving training dynamics. Experiments on mathematical reasoning and code-generation benchmarks confirm that G$^2$RPO-A substantially outperforms vanilla GRPO. Our code and models are available at https://github.com/T-Lab-CUHKSZ/G2RPO-A.
Problem

Research questions and friction points this paper is trying to address.

Improving small language models' reasoning with guided steps
Adaptively adjusting guidance strength during model training
Enhancing performance in math and code-generation tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Guided Group Relative Policy Optimization
Adaptive guidance strength adjustment
Injects ground-truth reasoning steps
🔎 Similar Papers
No similar papers found.
Y
Yongxin Guo
School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen
W
Wenbo Deng
School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen
Zhenglin Cheng
Zhenglin Cheng
Zhejiang University & Westlake University, SII
Multimodal LearningDiffusion Models
X
Xiaoying Tang
School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen