MO-GRPO: Mitigating Reward Hacking of Group Relative Policy Optimization on Multi-Objective Problems

📅 2025-09-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
In multi-objective reinforcement learning, the Group Relative Policy Optimization (GRPO) framework is vulnerable to reward hacking, leading to imbalanced optimization across objectives. To address this, we propose MO-GRPO, a variance-driven adaptive reward reweighting method. Its core innovation is a reward-variance-aware normalization mechanism that automatically balances the gradient contributions of each objective during policy updates, without manual hyperparameter tuning, while strictly preserving user-specified preference orderings among objectives. MO-GRPO integrates seamlessly into the GRPO framework and applies across four distinct task domains: multi-armed bandits, continuous control, machine translation, and instruction-tuned language modeling. Experimental results demonstrate that MO-GRPO significantly mitigates reward hacking, yielding more stable and balanced multi-objective optimization, and consistently outperforms the original GRPO across all evaluated metrics and tasks.

📝 Abstract
Group Relative Policy Optimization (GRPO) has been shown to be an effective algorithm when an accurate reward model is available. However, such a highly reliable reward model is not available in many real-world tasks. In this paper, we focus on multi-objective settings, in which we identify that GRPO is vulnerable to reward hacking, optimizing only one of the objectives at the cost of the others. To address this issue, we propose MO-GRPO, an extension of GRPO with a simple normalization method that reweights the reward functions automatically according to the variances of their values. We first show analytically that MO-GRPO ensures that all reward functions contribute evenly to the loss function while preserving the order of preferences, eliminating the need for manual tuning of the reward functions' scales. We then evaluate MO-GRPO experimentally in four domains: (i) the multi-armed bandits problem, (ii) a simulated control task (MO-Gymnasium), (iii) machine translation tasks on the WMT benchmark (En-Ja, En-Zh), and (iv) an instruction-following task. MO-GRPO achieves stable learning by evenly distributing correlations among the components of rewards, outperforming GRPO and showing itself to be a promising algorithm for multi-objective reinforcement learning problems.
Problem

Research questions and friction points this paper is trying to address.

Addresses reward hacking in multi-objective Group Relative Policy Optimization
Ensures balanced contribution from all reward functions during optimization
Eliminates manual tuning of reward scales across multiple objectives
Innovation

Methods, ideas, or system contributions that make the work stand out.

Normalizes reward functions using variance-based reweighting
Ensures balanced contribution of all reward objectives
Automatically adjusts reward scales without manual tuning
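The paper itself does not include code here, but the variance-based reweighting described above can be sketched as follows. This is a minimal, hypothetical illustration (the function name `mo_grpo_advantages` and the exact advantage formula are assumptions, not the authors' reference implementation): each objective's rewards are divided by their per-group standard deviation before being combined, so no single objective dominates the group-relative advantage regardless of its raw scale.

```python
import numpy as np

def mo_grpo_advantages(rewards, weights=None, eps=1e-8):
    """Group-relative advantages with variance-based reward normalization.

    rewards: array of shape (G, K) -- G sampled responses, K reward objectives.
    weights: optional length-K preference weights, applied AFTER normalization,
             so the raw scale of each reward no longer matters.
    NOTE: illustrative sketch, not the paper's reference implementation.
    """
    rewards = np.asarray(rewards, dtype=float)
    G, K = rewards.shape
    if weights is None:
        weights = np.ones(K)
    # Normalize each objective by its per-group standard deviation so that
    # every reward component contributes comparably to the combined signal.
    std = rewards.std(axis=0)
    normalized = rewards / (std + eps)
    combined = normalized @ np.asarray(weights, dtype=float)
    # Standard GRPO-style group baseline: center and scale within the group.
    return (combined - combined.mean()) / (combined.std() + eps)
```

In this sketch, an objective with large raw variance (e.g. a length reward in the hundreds) is scaled down to the same footing as a small-variance objective (e.g. a 0/1 correctness reward), which is the balancing behavior the Innovation bullets describe.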