Reward Shaping to Mitigate Reward Hacking in RLHF

📅 2025-02-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses reward hacking in RLHF—where models deviate from human intent by exploiting flaws in a scalar reward signal—by proposing Preference As Reward (PAR): a method that bypasses explicit scalar rewards and instead directly uses the reward model's implicit preference outputs as the reinforcement learning signal, enabling efficient and robust alignment with only a single reference reward. The authors distill three principled reward shaping guidelines—boundedness, "fast-then-stable" growth, and centering—and design a centered reward function accordingly. Integrated into the PPO framework, PAR significantly improves training stability and resilience against reward hacking. Experiments show that PAR achieves win rates at least 5 percentage points higher than competing methods on AlpacaEval 2.0, reaches optimal performance with only one reference reward, and remains robust even after two full epochs of training.

📝 Abstract
Reinforcement Learning from Human Feedback (RLHF) is essential for aligning large language models (LLMs) with human values. However, RLHF is susceptible to reward hacking, where the agent exploits flaws in the reward function rather than learning the intended behavior, thus degrading alignment. While reward shaping helps stabilize RLHF and partially mitigate reward hacking, a systematic investigation into shaping techniques and their underlying principles remains lacking. To bridge this gap, we present a comprehensive study of the prevalent reward shaping methods. Our analysis suggests three key design principles: (1) RL reward is ideally bounded, (2) RL benefits from rapid initial growth followed by gradual convergence, and (3) RL reward is best formulated as a function of centered reward. Guided by these insights, we propose Preference As Reward (PAR), a novel approach that leverages the latent preferences embedded within the reward model itself as the signal for reinforcement learning. We evaluated PAR on two base models, Gemma2-2B and Llama3-8B, using two datasets, Ultrafeedback-Binarized and HH-RLHF. Experimental results demonstrate PAR's superior performance over other reward shaping methods. On the AlpacaEval 2.0 benchmark, PAR achieves a win rate at least 5 percentage points higher than competing approaches. Furthermore, PAR exhibits remarkable data efficiency, requiring only a single reference reward for optimal performance, and maintains robustness against reward hacking even after two full epochs of training. Code is available at https://github.com/PorUna-byte/PAR.
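Based on the abstract, PAR replaces the raw scalar reward with the reward model's preference probability for the sampled response over a single reference response—i.e., the sigmoid of the centered reward. The sketch below illustrates this idea under that reading; the function name and signature are illustrative, not from the paper's code.

```python
import math

def par_reward(reward: float, ref_reward: float) -> float:
    """Sketch of the Preference As Reward (PAR) signal.

    The RL reward is the Bradley-Terry preference probability of the
    sampled response over a single reference response: the sigmoid of
    the centered reward (reward - ref_reward). This satisfies the
    paper's three shaping principles: it is bounded in (0, 1), it is
    a function of the centered reward, and it grows fast near zero
    then saturates for large margins ("fast-then-stable").
    """
    return 1.0 / (1.0 + math.exp(-(reward - ref_reward)))
```

Because the signal saturates, an exploitable response that drives the raw reward arbitrarily high yields almost no extra RL reward, which is the intuition behind PAR's resistance to reward hacking.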
Problem

Research questions and friction points this paper is trying to address.

Mitigate reward hacking in RLHF
Systematically study reward shaping techniques and their underlying principles
Propose the novel Preference As Reward (PAR) method
Innovation

Methods, ideas, or system contributions that make the work stand out.

PAR leverages the reward model's latent preferences as the RL signal
PAR needs only a single reference reward, making it data-efficient
PAR stays robust against reward hacking even after two epochs of training
🔎 Similar Papers
No similar papers found.