P^2O: Joint Policy and Prompt Optimization

📅 2026-03-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of sparse rewards in reinforcement learning, where "hard samples" yield zero advantage estimates and therefore provide no effective supervision signal. The authors propose a co-optimization framework that jointly refines policies and prompts: hard samples identified during training are handed to the Genetic-Pareto (GEPA) prompt optimization algorithm, which evolves prompt templates that guide the large language model to generate successful trajectories; the prompt-induced reasoning gains are then distilled into the policy parameters. The authors present this as the first method to integrate prompt optimization and policy learning in a unified training loop, moving beyond conventional prompt engineering, which relies solely on input augmentation. Experiments show state-of-the-art performance on in-distribution tasks and an average improvement of 4.7% on out-of-distribution benchmarks, substantially enhancing generalization.
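
To make the zero-advantage failure mode concrete, here is a minimal sketch assuming a GRPO-style group-relative baseline; the function names and the exact detection criterion are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def group_advantages(rewards):
    """GRPO-style group-relative advantages for G rollouts of one prompt."""
    r = np.asarray(rewards, dtype=float)
    std = r.std()
    if std == 0.0:
        # All rollouts got the same outcome reward (e.g. all failures):
        # every advantage is zero, so this sample contributes no gradient.
        return np.zeros_like(r)
    return (r - r.mean()) / std

def is_hard_sample(rewards):
    """'Hard sample' in the paper's sense: near-zero success rate, so the
    sparse outcome reward yields zero-advantage estimates."""
    return max(rewards) == 0.0  # no rollout earned a verifiable reward

print(group_advantages([0, 0, 0, 0]))  # -> [0. 0. 0. 0.]  (no learning signal)
print(is_hard_sample([0, 0, 0, 0]))   # -> True
```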

📝 Abstract
Reinforcement Learning with Verifiable Rewards (RLVR) has emerged as a powerful paradigm for enhancing the reasoning capabilities of Large Language Models (LLMs). However, vanilla RLVR suffers from inefficient exploration, particularly when confronting "hard samples" that yield near-zero success rates. In such scenarios, the reliance on sparse outcome rewards typically results in zero-advantage estimates, effectively starving the model of supervision signals despite the high informational value of these instances. To address this, we propose P^2O, a novel framework that synergizes Prompt Optimization with Policy Optimization. P^2O identifies hard samples during training iterations and leverages the Genetic-Pareto (GEPA) prompt optimization algorithm to evolve prompt templates that guide the model toward discovering successful trajectories. Crucially, unlike traditional prompt engineering methods that rely on input augmentation, P^2O distills the reasoning gains induced by these optimized prompts directly into the model parameters. This mechanism provides denser positive supervision signals for hard samples and accelerates convergence. Extensive experiments demonstrate that P^2O not only achieves superior performance on in-distribution datasets but also exhibits strong generalization, yielding substantial improvements on out-of-distribution benchmarks (+4.7% avg.).
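
As a rough, non-authoritative illustration of the loop the abstract describes, the sketch below wires the pieces together. Every callable (policy_sample, rlvr_update, gepa_evolve, distill) is a hypothetical stub for a component the paper names, and the control flow, including pairing the original prompt with trajectories found under the evolved prompt, is an assumption inferred from the abstract, not the authors' released code.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Trajectory:
    prompt: str
    response: str
    reward: float  # verifiable outcome reward, e.g. 1.0 if the answer checks out

def p2o_step(
    policy_sample: Callable[[str, int], List[Trajectory]],  # roll out G responses
    rlvr_update: Callable[[List[Trajectory]], None],        # e.g. a GRPO/PPO step
    gepa_evolve: Callable[[str], str],                      # GEPA-style prompt evolution
    distill: Callable[[str, List[Trajectory]], None],       # SFT-style distillation
    batch: List[str],
    group_size: int = 8,
) -> None:
    """One training step in the spirit of P^2O (illustrative, not the paper's code)."""
    for prompt in batch:
        rollouts = policy_sample(prompt, group_size)
        if any(t.reward > 0 for t in rollouts):
            # At least one success: ordinary RLVR provides a usable
            # advantage signal, so take a normal policy-gradient step.
            rlvr_update(rollouts)
            continue
        # Hard sample: all rollouts failed, so group-relative advantages
        # are zero. Evolve the prompt template to elicit successes.
        evolved_prompt = gepa_evolve(prompt)
        guided = policy_sample(evolved_prompt, group_size)
        winners = [t for t in guided if t.reward > 0]
        if winners:
            # Distill the prompt-induced reasoning into the parameters:
            # train on the ORIGINAL prompt paired with responses that
            # succeeded under the evolved prompt (our assumption).
            distill(prompt, winners)
```

The point of the final step is the one the abstract stresses: successes elicited by the evolved prompt are pushed into the model parameters rather than left as input-side augmentation.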
Problem

Research questions and friction points this paper is trying to address.

Reinforcement Learning with Verifiable Rewards
hard samples
inefficient exploration
sparse rewards
Large Language Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Prompt Optimization
Policy Optimization
Reinforcement Learning with Verifiable Rewards
Genetic-Pareto (GEPA)
Hard Sample Exploration
Xinyu Lu
The Chinese University of Hong Kong, Shenzhen
Generative Models for Decision Making, Computer Science, Smart City, Cyber-physical System
Kaiqi Zhang
Syracuse University
Artificial Intelligence, Deep Learning
Jinglin Yang
Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100085, China; School of Cyber Security, University of Chinese Academy of Sciences, Beijing 100085, China
Boxi Cao
Institute of Software, Chinese Academy of Sciences
Natural Language Processing
Yaojie Lu
Institute of Software, Chinese Academy of Sciences
Information Extraction, Large Language Models
Hongyu Lin
Institute of Software, Chinese Academy of Sciences
Natural Language Processing, Information Extraction and Machine Learning
Min He
National Computer Network Emergency Response Technical Team/Coordination Center of China, Beijing 100029, China
Xianpei Han
Chinese Information Processing Laboratory, Institute of Software, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
Le Sun
Institute of Software, CAS
Information Retrieval, Natural Language Processing