MURPHY: Multi-Turn GRPO for Self-Correcting Code Generation

📅 2025-11-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing GRPO methods perform well on static reasoning benchmarks but underperform on agentic tasks requiring multi-step, iterative decision-making. This paper proposes Murphy, a multi-turn reflective optimization framework that extends GRPO (Group Relative Policy Optimization) to interactive, multi-turn training. Murphy introduces an execution-feedback-driven self-correction mechanism, combining verifiable rewards with hybrid quantitative and qualitative feedback so the model can continuously refine its reasoning trajectories. On a similar compute budget, it achieves up to an 8% relative improvement in pass@1 for code generation on model families including Qwen and OLMo, consistently outperforming single-round GRPO. Key contributions are: (1) multi-round policy iteration with dynamic trajectory correction; (2) fine-grained reinforcement signals derived from executable outcomes; and (3) a closed-loop training paradigm spanning reflection, correction, and verification across interaction turns.
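The paper's training loop is not reproduced on this page; the following is a minimal sketch of what an execution-feedback self-correction rollout of this kind could look like. The `generate` and `execute` callables are hypothetical stand-ins for the policy's sampler and a unit-test harness, not the paper's actual API.

```python
# Hypothetical sketch of a multi-turn self-correction rollout.
# `generate` samples a candidate solution from the policy;
# `execute` runs it against unit tests and reports feedback.

def multi_turn_rollout(generate, execute, prompt, max_turns=3):
    """Sample code, run it, and feed execution feedback into the next turn."""
    trajectory, context = [], prompt
    for _ in range(max_turns):
        code = generate(context)            # candidate solution for this turn
        pass_rate, errors = execute(code)   # quantitative + qualitative feedback
        trajectory.append((context, code, pass_rate))
        if pass_rate == 1.0:                # all tests pass: stop early
            break
        # Closed loop: the failed attempt and its errors are appended so
        # the next turn can reflect on and correct the previous attempt.
        context = (f"{context}\n\nPrevious attempt:\n{code}\n"
                   f"Execution errors:\n{errors}\nRevise the solution.")
    return trajectory
```

Each turn's reward can then be scored group-relatively, as in single-round GRPO, so the multi-turn extension reuses the same policy-gradient machinery.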

📝 Abstract
Reinforcement Learning with Verifiable Rewards (RLVR) has emerged as a powerful framework for enhancing the reasoning capabilities of large language models (LLMs). However, existing approaches such as Group Relative Policy Optimization (GRPO) and its variants, while effective on reasoning benchmarks, struggle with agentic tasks that require iterative decision-making. We introduce Murphy, a multi-turn reflective optimization framework that extends GRPO by incorporating iterative self-correction during training. By leveraging both quantitative and qualitative execution feedback, Murphy enables models to progressively refine their reasoning across multiple turns. Evaluations on code generation benchmarks with model families such as Qwen and OLMo show that Murphy consistently improves performance, achieving up to an 8% relative gain in pass@1 over GRPO on similar compute budgets.
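For background, here is a minimal sketch of the group-relative advantage at the core of GRPO-style updates, in its standard normalize-within-group form; the function name is illustrative, and Murphy's exact estimator may differ.

```python
import statistics

def group_relative_advantages(rewards):
    """Group-relative baseline used by GRPO-style methods: each sampled
    solution's reward is normalized against its own group, so no learned
    value network is needed."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards)
    if std == 0.0:
        return [0.0] * len(rewards)   # identical rewards carry no signal
    return [(r - mean) / std for r in rewards]

# Example: four sampled solutions scored by test pass rate.
print(group_relative_advantages([1.0, 0.5, 0.0, 0.5]))
```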
Problem

Research questions and friction points this paper is trying to address.

Extends the GRPO framework with iterative self-correction during training
Addresses agentic tasks that require multi-turn decision-making in LLMs
Improves code generation through progressive refinement of reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-turn GRPO framework for self-correction
Leverages quantitative and qualitative execution feedback (sketched below)
Enables progressive reasoning refinement across turns
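The paper's test harness is not published on this page; below is a minimal sketch of how verifiable execution feedback could be produced for code generation. The `execution_feedback` helper is a hypothetical illustration: it runs a candidate solution against its unit tests in a subprocess and returns both a verifiable reward and the raw error text for qualitative feedback.

```python
import os
import subprocess
import tempfile

def execution_feedback(code: str, test_code: str, timeout: float = 5.0):
    """Run a candidate solution against its unit tests in a subprocess,
    returning a verifiable reward (did the tests pass?) and the stderr
    text as qualitative feedback for the next turn."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code + "\n\n" + test_code)
        path = f.name
    try:
        result = subprocess.run(
            ["python", path], capture_output=True, text=True, timeout=timeout
        )
        return (1.0 if result.returncode == 0 else 0.0), result.stderr
    except subprocess.TimeoutExpired:
        return 0.0, "timeout"
    finally:
        os.unlink(path)
```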