🤖 AI Summary
Existing GRPO methods perform well on static reasoning benchmarks but underperform on agent-oriented tasks that require multi-step, iterative decision-making. This paper proposes Murphy, a multi-turn reflective optimization framework that extends GRPO to multi-turn interactive training. Murphy introduces an execution-feedback-driven self-correction mechanism, combining verifiable rewards with hybrid quantitative and qualitative feedback so that models continuously refine their reasoning trajectories across turns. It runs on compute budgets similar to GRPO's and achieves up to an 8% relative improvement in pass@1 on code generation with model families including Qwen and OLMo, consistently outperforming single-round GRPO. Key contributions: (1) multi-round policy iteration with dynamic trajectory correction; (2) fine-grained reinforcement signals derived from executable outcomes; and (3) a closed-loop training paradigm of reflection, correction, and verification across interaction turns.
📝 Abstract
Reinforcement Learning with Verifiable Rewards (RLVR) has emerged as a powerful framework for enhancing the reasoning capabilities of large language models (LLMs). However, existing approaches such as Group Relative Policy Optimization (GRPO) and its variants, while effective on reasoning benchmarks, struggle with agentic tasks that require iterative decision-making. We introduce Murphy, a multi-turn reflective optimization framework that extends GRPO by incorporating iterative self-correction during training. By leveraging both quantitative and qualitative execution feedback, Murphy enables models to progressively refine their reasoning across multiple turns. Evaluations on code generation benchmarks with model families such as Qwen and OLMo show that Murphy consistently improves performance, achieving up to an 8% relative gain in pass@1 over GRPO under similar compute budgets.
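The closed loop described above (generate, execute, reflect, correct, verify) can be sketched in miniature. This is a hypothetical illustration, not the paper's implementation: the toy test harness, the `refine` step, and the per-rollout reward shaping are all assumptions standing in for an LLM policy and a real code executor; only the GRPO-style group-relative advantage at the end mirrors the named method.

```python
import statistics

def run_tests(candidate):
    """Verifiable reward: fraction of unit tests the candidate passes."""
    tests = [(2, 4), (3, 9), (5, 25)]  # (input, expected) for f(x) = x * x
    return sum(1 for x, y in tests if candidate(x) == y) / len(tests)

def refine(candidate):
    """Toy self-correction: replace a failing program with a fixed one.
    In Murphy this would be the model revising its own output given
    execution feedback; here it is a hard-coded stand-in."""
    return lambda x: x * x

def multi_turn_rollout(candidate, max_turns=3):
    """Multi-turn loop: execute, collect feedback, correct, re-verify."""
    rewards = []
    for _ in range(max_turns):
        r = run_tests(candidate)       # quantitative execution feedback
        rewards.append(r)
        if r == 1.0:                   # fully verified: stop early
            break
        candidate = refine(candidate)  # correction step between turns
    return rewards

def group_relative_advantages(group_rewards):
    """GRPO-style advantage: each rollout's reward minus the group mean."""
    mu = statistics.mean(group_rewards)
    return [r - mu for r in group_rewards]

# A group of two rollouts, one starting from a buggy program and one
# from a correct one; score each rollout by its mean per-turn reward.
group = [statistics.mean(multi_turn_rollout(lambda x: x + x)),  # buggy start
         statistics.mean(multi_turn_rollout(lambda x: x * x))]  # correct start
advantages = group_relative_advantages(group)
```

The buggy rollout earns partial credit on its first turn, corrects itself, and verifies on the second, so averaging per-turn rewards lets the advantage signal still favor trajectories that were right from the start while rewarding successful self-correction.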