🤖 AI Summary
Standard reward models are susceptible to spurious correlations induced by non-causal features such as response length and flattery, leading to reward hacking. This work proposes a causal disentangled representation learning framework that decomposes contextual embeddings into causal factors, which are used for reward prediction, and non-causal factors. It is, to the authors' knowledge, the first to integrate an adversarial gradient-reversal mechanism that actively suppresses the spurious association between the non-causal factors and the reward signal. Evaluated on both mathematical reasoning and conversational tasks, the method substantially improves the robustness of reward models, mitigating length and sycophancy biases, and downstream reinforcement learning from human feedback (RLHF) performance surpasses state-of-the-art baselines.
📝 Abstract
A reliable reward model is essential for aligning large language models with human preferences through reinforcement learning from human feedback. However, standard reward models are susceptible to spurious features that are not causally related to human labels. This can lead to reward hacking, where high predicted reward does not translate into better behavior. In this work, we address this problem from a causal perspective by proposing a factored representation learning framework that decomposes the model's contextual embedding into (1) causal factors that are sufficient for reward prediction and (2) non-causal factors that capture reward-irrelevant attributes such as length or sycophantic bias. The reward head is then constrained to depend only on the causal component. In addition, we introduce an adversarial head trained to predict reward from the non-causal factors, while applying gradient reversal to discourage them from encoding reward-relevant information. Experiments on both mathematical and dialogue tasks demonstrate that our method learns more robust reward models and consistently improves downstream RLHF performance over state-of-the-art baselines. Analyses on length and sycophantic bias further validate the effectiveness of our method in mitigating reward hacking behaviors.
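The two-branch design described above can be sketched in PyTorch. This is a minimal illustration, not the paper's implementation: the module and projection names (`causal_proj`, `noncausal_proj`, `adv_head`) and the use of simple linear projections over a shared contextual embedding `h` are assumptions for clarity. The key piece is the gradient-reversal function, which acts as the identity in the forward pass but negates (and scales) gradients in the backward pass, so training the adversarial head pushes the non-causal factors to become uninformative about reward.

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses and scales gradients in backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip the sign of the gradient flowing into the non-causal factors.
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


class DisentangledRewardHead(nn.Module):
    """Illustrative sketch: split a contextual embedding into causal and
    non-causal factors. The reward head sees only the causal factors; an
    adversarial head predicts reward from the non-causal factors through
    gradient reversal, discouraging them from encoding reward information.
    """

    def __init__(self, hidden_dim, factor_dim, lambd=1.0):
        super().__init__()
        self.causal_proj = nn.Linear(hidden_dim, factor_dim)     # z_c (assumed linear)
        self.noncausal_proj = nn.Linear(hidden_dim, factor_dim)  # z_nc (assumed linear)
        self.reward_head = nn.Linear(factor_dim, 1)
        self.adv_head = nn.Linear(factor_dim, 1)
        self.lambd = lambd

    def forward(self, h):
        z_c = self.causal_proj(h)
        z_nc = self.noncausal_proj(h)
        reward = self.reward_head(z_c)                                # main prediction
        adv_reward = self.adv_head(grad_reverse(z_nc, self.lambd))    # adversarial branch
        return reward, adv_reward
```

In training, a preference loss (e.g. Bradley-Terry) would be applied to `reward`, and the same kind of loss to `adv_reward`; because of the reversed gradients, minimizing the adversarial loss updates the encoder and `noncausal_proj` to *remove* reward-relevant information from `z_nc` while the adversarial head itself still learns to exploit whatever remains.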