🤖 AI Summary
This work addresses a limitation of existing alignment methods: under KL regularization, the optimized policy can inherit preference biases from the base policy, and amplifying rewards to counteract this bias increases the risk of reward hacking, so user utility is not maximized. The authors propose a reward shaping mechanism grounded in a game-theoretic perspective, modeling reward model optimization as a Stackelberg game for the first time. The approach is low-overhead, integrates seamlessly into inference-time alignment, and approximates the optimal reward model, effectively balancing bias mitigation against the risk of reward manipulation. Experiments across multiple evaluation settings show that the method consistently improves average reward and achieves win-tie rates exceeding 66% against all baselines.
📝 Abstract
Existing alignment methods directly use the reward model learned from user preference data to optimize an LLM policy, subject to KL regularization with respect to the base policy. This practice is suboptimal for maximizing the user's utility because the KL regularization may cause the LLM to inherit biases in the base policy that conflict with user preferences. While amplifying rewards for preferred outputs can mitigate this bias, it also increases the risk of reward hacking. This tradeoff motivates the problem of optimally designing reward models under KL regularization. We formalize this reward model optimization problem as a Stackelberg game and show that a simple reward shaping scheme can effectively approximate the optimal reward model. We empirically evaluate our method in inference-time alignment settings and demonstrate that it integrates seamlessly into existing alignment methods with minimal overhead. Our method consistently improves average reward and achieves win-tie rates exceeding 66% against all baselines, averaged across evaluation settings.
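For context, the KL-regularized policy optimization referenced in the abstract is typically written as follows; this is the standard formulation from the RLHF literature, not notation taken from this paper (here \(r\) is the learned reward model, \(\pi_0\) the base policy, and \(\beta\) the regularization strength):

```latex
% Standard KL-regularized alignment objective (assumed notation)
\max_{\pi} \; \mathbb{E}_{x,\, y \sim \pi(\cdot \mid x)}\!\left[ r(x, y) \right]
  - \beta \, \mathrm{KL}\!\left( \pi(\cdot \mid x) \,\|\, \pi_0(\cdot \mid x) \right)

% Its well-known closed-form solution, which shows why the optimized
% policy stays tied to the base policy and can inherit its biases:
\pi^{*}(y \mid x) \;\propto\; \pi_0(y \mid x) \, \exp\!\left( \tfrac{1}{\beta} r(x, y) \right)
```

The closed form makes the tradeoff in the abstract concrete: scaling up \(r\) (or shrinking \(\beta\)) weakens the pull toward \(\pi_0\) and reduces inherited bias, but also lets the policy exploit errors in \(r\), i.e., reward hacking.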