🤖 AI Summary
This work addresses the reward-hacking problem in reinforcement learning, where a policy exploits inaccuracies in the reward model and learns unintended behaviors. It proposes a gradient regularization method that steers policy updates toward regions where the reward model is more accurate and the optimization landscape is flatter. Within both RLHF and RLVR frameworks for language model alignment, the work shows that reference resets of the KL divergence penalty implicitly perform gradient regularization, and improves on them with explicit regularization via an efficient finite-difference estimate. Experimental results show that the method consistently outperforms conventional KL-penalized training across multiple tasks: it achieves higher win rates under GPT-based evaluation, mitigates format overfitting, and prevents judge deception in LLM-as-a-Judge mathematical reasoning tasks. These findings also support a theoretical connection between reward model accuracy and optimization flatness.
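To make the two baselines concrete, below is a minimal NumPy sketch (not the paper's code; all names and the toy distributions are illustrative) of the standard KL-penalized RLHF reward and of a reference reset, which periodically re-anchors the KL penalty at the current policy:

```python
import numpy as np

def kl(p, q):
    """KL divergence KL(p || q) between two categorical distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

def shaped_reward(reward, policy_probs, ref_probs, beta=0.1):
    """KL-penalized reward commonly used in RLHF: r - beta * KL(pi || pi_ref)."""
    return reward - beta * kl(policy_probs, ref_probs)

def train_with_resets(policy, updates, reset_every, step_fn):
    """Run `updates` policy updates; every `reset_every` steps, snapshot the
    current policy as the new KL reference (a "reference reset").
    `step_fn(policy, ref)` is a placeholder for one RL update."""
    ref = policy.copy()
    for t in range(1, updates + 1):
        policy = step_fn(policy, ref)
        if t % reset_every == 0:
            ref = policy.copy()  # re-anchor the KL penalty at the new policy
    return policy, ref
```

The reset weakens the pull toward the original reference, which is why, as the abstract argues, something else (here, an implicit gradient regularization effect) must account for its benefit.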
📝 Abstract
Reinforcement Learning from Human Feedback (RLHF) and Reinforcement Learning with Verifiable Rewards (RLVR) are two key steps in the post-training of modern Language Models (LMs). A common problem is reward hacking, where the policy may exploit inaccuracies of the reward and learn an unintended behavior. Most previous works address this by limiting the policy update with a Kullback-Leibler (KL) penalty towards a reference model. We propose a different framing: train the LM in a way that biases policy updates towards regions in which the reward is more accurate. First, we derive a theoretical connection between the accuracy of a reward model and the flatness of an optimum at convergence. Gradient regularization (GR) can then be used to bias training towards flatter regions and thereby maintain reward model accuracy. We confirm these results by showing that the gradient norm and reward accuracy are empirically correlated in RLHF. We then show that reference resets of the KL penalty implicitly use GR to find flatter regions with higher reward accuracy. We further improve on this by proposing explicit GR with an efficient finite-difference estimate. Empirically, GR performs better than a KL penalty across a diverse set of RL experiments with LMs: it achieves a higher GPT-judged win rate in RLHF, avoids over-focusing on format in rule-based math rewards, and prevents hacking the judge in LLM-as-a-Judge math tasks.
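The explicit regularizer penalizes the gradient norm of the loss, steering optimization toward flat regions. A minimal sketch of the idea on a toy scalar loss, with the regularizer's gradient estimated by finite differences (the loss, step sizes, and function names are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def loss(theta):
    """Toy scalar loss standing in for the (negative) RL objective:
    a quadratic with curvature 3, so grad loss = 3 * theta."""
    return 1.5 * theta ** 2

def grad(theta, eps=1e-5):
    """Central finite-difference gradient of the loss."""
    return (loss(theta + eps) - loss(theta - eps)) / (2 * eps)

def gr_update(theta, lr=0.1, lam=0.5, rho=1e-3):
    """One descent step on the GR objective loss(theta) + lam * |grad loss(theta)|.
    The regularizer's own gradient is estimated by a second finite difference:
    d/dtheta |g(theta)| ~= (|g(theta + rho)| - |g(theta - rho)|) / (2 * rho)."""
    g = grad(theta)
    gr_grad = (abs(grad(theta + rho)) - abs(grad(theta - rho))) / (2 * rho)
    return theta - lr * (g + lam * gr_grad)
```

The extra term pushes harder where the gradient norm grows quickly, i.e. away from sharp basins; in high dimensions the same estimate needs only a few additional forward passes rather than second-order derivatives.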