RiskPO: Risk-based Policy Optimization via Verifiable Reward for LLM Post-Training

📅 2025-10-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Mean-based reinforcement learning methods (e.g., GRPO) for post-training large language models suffer from entropy collapse and diminished reasoning diversity. Method: We propose a risk-aware policy optimization framework featuring (i) a Mixed Value-at-Risk objective that weights multiple regions of the reward distribution to explicitly model challenging samples, and (ii) a problem-bundling mechanism with sequence-level gradient aggregation, leveraging verifiable reward signals to improve training stability. Contribution/Results: We provide theoretical guarantees showing that our approach mitigates policy degradation and enhances exploration. Empirical evaluation across mathematical reasoning, multimodal reasoning, and code generation demonstrates consistent and significant improvements over GRPO and its variants in both Pass@1 and Pass@k metrics, validating the efficacy of risk-sensitive modeling for enhancing complex reasoning capabilities.

📝 Abstract
Reinforcement learning with verifiable reward has recently emerged as a central paradigm for post-training large language models (LLMs); however, prevailing mean-based methods, such as Group Relative Policy Optimization (GRPO), suffer from entropy collapse and limited reasoning gains. We argue that these issues stem from overemphasizing high-probability output sequences while neglecting rare but informative reasoning paths. To address these challenges, we propose Risk-based Policy Optimization (RiskPO), which substitutes classical mean-based objectives with principled risk measures. Specifically, we introduce a Mixed Value-at-Risk objective that integrates weighted attention over multiple regions of the reward distribution, thereby amplifying gradient signals on challenging instances and preventing overconfident convergence. We further design a bundling scheme that aggregates multiple questions into bundles, thus enriching the feedback signal and yielding more stable and informative training dynamics. Theoretically, we prove that the risk-averse update alleviates entropy collapse and promotes exploration. Numerically, RiskPO achieves consistent and significant improvements in mathematical reasoning, multi-modal reasoning, and code generation benchmarks, surpassing GRPO and its variants on both Pass@1 and Pass@k metrics. Our results demonstrate that risk-based optimization provides a rigorous and effective paradigm for enhancing LLM reasoning capabilities.
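To make the core idea concrete, here is a minimal sketch of a risk-weighted policy-gradient surrogate in the spirit of the Mixed Value-at-Risk objective described above: samples whose verifiable reward falls in the low-reward tail (below a Value-at-Risk cutoff) receive a larger gradient weight than the rest of the group, instead of every sample contributing equally as in a mean-based objective. The specific region weights, the `tail_boost` parameter, and the function names are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def mixed_var_weights(rewards, alpha=0.25, tail_boost=3.0):
    """Hypothetical mixed Value-at-Risk weighting: samples whose reward
    falls at or below the alpha-quantile (the hard, low-reward tail)
    receive a larger gradient weight than the rest of the batch.
    NOTE: the exact region weights used in RiskPO are not reproduced here."""
    var_threshold = np.quantile(rewards, alpha)      # Value-at-Risk cutoff
    weights = np.where(rewards <= var_threshold, tail_boost, 1.0)
    return weights / weights.sum()                   # normalize to sum to 1

def risk_weighted_objective(log_probs, rewards, alpha=0.25):
    """Risk-sensitive surrogate: a weighted REINFORCE-style objective that
    up-weights challenging (low-reward) samples instead of taking the
    plain group mean."""
    w = mixed_var_weights(rewards, alpha)
    advantages = rewards - rewards.mean()            # simple mean baseline
    return float(np.sum(w * advantages * log_probs))

# Toy example: 8 sampled responses for one prompt with 0/1 verifiable rewards
rng = np.random.default_rng(0)
log_probs = rng.normal(-2.0, 0.5, size=8)
rewards = np.array([0., 0., 1., 1., 1., 0., 1., 1.])
loss = -risk_weighted_objective(log_probs, rewards)  # minimize negative objective
```

Relative to a mean-based update, the tail up-weighting amplifies gradient signal on instances the policy currently fails, which is the mechanism the abstract credits with preventing overconfident convergence.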
Problem

Research questions and friction points this paper is trying to address.

Addresses entropy collapse in LLM post-training optimization
Enhances exploration of rare but informative reasoning paths
Improves mathematical and multi-modal reasoning performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Risk-based optimization replaces mean-based objectives
Mixed Value-at-Risk amplifies gradients on challenging instances
Bundling scheme aggregates questions for stable training
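The bundling idea in the last bullet can be sketched as follows: instead of scoring each question with a single 0/1 verifiable reward, questions are grouped into bundles and each bundle is scored jointly, yielding a multi-level feedback signal. This is a simplified illustration under assumed choices (random grouping, sum aggregation, and the `bundle_size` parameter); the concrete bundling rule and sequence-level gradient aggregation in RiskPO are not reproduced here.

```python
import numpy as np

def bundle_rewards(per_question_rewards, bundle_size=4, seed=0):
    """Hypothetical bundling: randomly group questions into bundles and
    score each bundle by the sum of its members' verifiable rewards.
    A bundle of k binary rewards yields k+1 possible scores, a richer
    signal than a single 0/1 outcome per question."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(per_question_rewards))
    bundles = [idx[i:i + bundle_size] for i in range(0, len(idx), bundle_size)]
    return [float(np.sum(per_question_rewards[b])) for b in bundles]

# Toy example: 8 questions with binary pass/fail rewards
rewards = np.array([1., 0., 1., 1., 0., 0., 1., 1.])
scores = bundle_rewards(rewards, bundle_size=4)  # two bundle-level scores
```

Because bundle scores take more distinct values than individual binary rewards, the reward distribution over bundles is less degenerate, which is consistent with the paper's claim of more stable and informative training dynamics.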