MHPO: Modulated Hazard-aware Policy Optimization for Stable Reinforcement Learning

📅 2026-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing policy optimization methods struggle to mitigate extreme policy shifts because hard clipping of the importance ratio introduces non-differentiable boundaries and vanishing-gradient regions, leading to unstable training. This work proposes the MHPO framework, which introduces a Log-Fidelity Modulator to map importance ratios into a bounded, differentiable domain and a Decoupled Hazard Penalty grounded in survival analysis. For the first time, this approach enables asymmetric and decoupled control over policy expansion and contraction. By preserving gradient fidelity and trust-region stability, MHPO substantially improves training robustness. Experimental results show that MHPO consistently outperforms existing methods across diverse text and vision-language reasoning benchmarks, improving both performance and stability.
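The page does not give the exact form of the Log-Fidelity Modulator. A minimal sketch of the idea, assuming a tanh-compressed log-ratio (the function name, constants, and form are illustrative assumptions, not the paper's formula), might look like this:

```python
# Sketch of an LFM-style bounded ratio mapping (assumed form): compress the
# log importance ratio with tanh so the mapped ratio stays within
# (exp(-scale), exp(scale)) and remains differentiable everywhere.
import torch

def modulated_ratio(logp_new: torch.Tensor,
                    logp_old: torch.Tensor,
                    scale: float = 2.0) -> torch.Tensor:
    log_ratio = logp_new - logp_old                             # unbounded log importance ratio
    bounded_log_ratio = scale * torch.tanh(log_ratio / scale)   # smooth saturation, no hard boundary
    return torch.exp(bounded_log_ratio)                          # ~= true ratio near 1, bounded for outliers
```

Unlike hard clipping, the gradient of such a mapping decays smoothly toward its bounds rather than dropping to zero at a fixed threshold, which is the gradient-fidelity property the summary emphasizes.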

📝 Abstract
Regulating the importance ratio is critical to the training stability of Group Relative Policy Optimization (GRPO)-based frameworks. However, prevailing ratio control methods, such as hard clipping, suffer from non-differentiable boundaries and vanishing-gradient regions, failing to maintain gradient fidelity. Furthermore, these methods lack a hazard-aware mechanism to adaptively suppress extreme deviations, leaving the optimization process vulnerable to abrupt policy shifts. To address these challenges, we propose Modulated Hazard-aware Policy Optimization (MHPO), a novel framework designed for robust and stable reinforcement learning. MHPO introduces a Log-Fidelity Modulator (LFM) that maps unbounded importance ratios into a bounded, differentiable domain. This mechanism prevents high-variance outlier tokens from destabilizing the loss landscape while ensuring global gradient stability. Complementing it, a Decoupled Hazard Penalty (DHP) integrates cumulative hazard functions from survival analysis to regulate positive and negative policy shifts independently. By shaping the optimization landscape with hazard-aware penalties, MHPO achieves fine-grained regulation of asymmetric policy shifts, simultaneously mitigating mode collapse from over-expansion and preventing policy erosion from catastrophic contraction within a stabilized trust region. Extensive evaluations on diverse reasoning benchmarks across both text-based and vision-language tasks demonstrate that MHPO consistently outperforms existing methods, achieving superior performance while significantly enhancing training stability.
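The abstract does not spell out the cumulative hazard functions used by the DHP. The sketch below assumes a simple Weibull-form cumulative hazard H(x) = (x / λ)^k applied separately to upward (expansion) and downward (contraction) deviations of the ratio, with independent hyperparameters per side; all names and default values are illustrative assumptions:

```python
# Sketch of a DHP-style regularizer (assumed Weibull cumulative hazard):
# penalize expansion (ratio > 1) and contraction (ratio < 1) with
# independent shapes so the two sides can be controlled asymmetrically.
import torch

def decoupled_hazard_penalty(ratio: torch.Tensor,
                             lam_pos: float = 0.5, k_pos: float = 2.0,
                             lam_neg: float = 0.5, k_neg: float = 2.0) -> torch.Tensor:
    dev_pos = torch.clamp(ratio - 1.0, min=0.0)    # expansion magnitude
    dev_neg = torch.clamp(1.0 - ratio, min=0.0)    # contraction magnitude
    hazard_pos = (dev_pos / lam_pos) ** k_pos      # cumulative hazard for expansion
    hazard_neg = (dev_neg / lam_neg) ** k_neg      # cumulative hazard for contraction
    return (hazard_pos + hazard_neg).mean()
```

Decoupling the two sides lets over-expansion and catastrophic contraction be penalized at different rates, matching the asymmetric control the abstract describes.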
Problem

Research questions and friction points this paper is trying to address.

importance ratio regulation
training stability
hazard-aware mechanism
policy shift
gradient fidelity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modulated Hazard-aware Policy Optimization
Log-Fidelity Modulator
Decoupled Hazard Penalty
importance ratio regulation
stable reinforcement learning