🤖 AI Summary
This work addresses the challenge of sparse rewards in reinforcement learning, where "hard examples" yield zero advantage estimates and thus provide no effective supervision signal. The authors propose a co-optimization framework that jointly refines policies and prompts: hard examples are identified during training, the Genetic-Pareto (GEPA) algorithm is used to optimize prompt templates that guide large language models to generate successful trajectories, and the prompt-induced reasoning gains are then distilled into the policy parameters. The authors present this as the first method to integrate prompt optimization and policy learning within a unified training loop, moving beyond conventional prompt engineering, which relies solely on input augmentation. Experiments show state-of-the-art performance on in-distribution tasks and an average improvement of 4.7% on out-of-distribution benchmarks, indicating stronger generalization.
📝 Abstract
Reinforcement Learning with Verifiable Rewards (RLVR) has emerged as a powerful paradigm for enhancing the reasoning capabilities of Large Language Models (LLMs). However, vanilla RLVR suffers from inefficient exploration, particularly when confronting "hard samples" that yield near-zero success rates. In such scenarios, the reliance on sparse outcome rewards typically results in zero-advantage estimates, effectively starving the model of supervision signals despite the high informational value of these instances. To address this, we propose P^2O, a novel framework that synergizes Prompt Optimization with Policy Optimization. P^2O identifies hard samples during training iterations and leverages the Genetic-Pareto (GEPA) prompt optimization algorithm to evolve prompt templates that guide the model toward discovering successful trajectories. Crucially, unlike traditional prompt engineering methods that rely on input augmentation, P^2O distills the reasoning gains induced by these optimized prompts directly into the model parameters. This mechanism provides denser positive supervision signals for hard samples and accelerates convergence. Extensive experiments demonstrate that P^2O not only achieves superior performance on in-distribution datasets but also exhibits strong generalization, yielding substantial improvements on out-of-distribution benchmarks (+4.7% avg.).
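The failure mode the abstract describes (all rollouts for a sample receiving the same sparse 0/1 reward, so group-relative advantages collapse to zero) and the resulting routing of hard samples through prompt evolution can be sketched as below. This is a minimal illustration, not the paper's implementation: `rollout`, `evolve_prompt`, and `distill` are hypothetical placeholders standing in for the model's sampler, the GEPA search, and the distillation step.

```python
def group_advantages(rewards):
    """GRPO-style advantage estimate: each rollout's reward minus the group mean.

    Under a sparse 0/1 outcome reward, a group of all-failures (or all-successes)
    gives identical rewards, so every advantage is zero and the sample
    contributes no policy-gradient signal.
    """
    mean = sum(rewards) / len(rewards)
    return [r - mean for r in rewards]


def is_hard_sample(rewards, eps=1e-8):
    """A sample is 'hard' when its rollout group yields only zero advantages."""
    return all(abs(a) < eps for a in group_advantages(rewards))


def p2o_step(batch, rollout, evolve_prompt, distill):
    """One illustrative co-optimization step (assumed control flow):

    hard samples are retried under an evolved prompt template, and any
    resulting successful trajectories are distilled into the policy.
    """
    for sample in batch:
        rewards = rollout(sample, prompt=None)
        if is_hard_sample(rewards):
            prompt = evolve_prompt(sample)           # e.g. GEPA prompt search
            guided = rollout(sample, prompt=prompt)  # retry with evolved prompt
            if any(r > 0 for r in guided):
                distill(sample, prompt)              # train on guided successes
```

For example, a rollout group with rewards `[0, 0, 0, 0]` is flagged as hard (zero advantage everywhere), while `[1, 0, 0, 0]` is not, since the single success already yields a nonzero advantage.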