🤖 AI Summary
Reward models trained via reinforcement learning from human feedback (RLHF) often exhibit length bias: they conflate response length with quality and therefore favor verbose outputs. To address this, we propose a causal-inference-based debiasing framework that explicitly disentangles the confounding effect of length from content quality. Specifically, we construct two types of counterfactual sample pairs: (1) semantically equivalent responses with substantially different lengths, and (2) responses of similar length but markedly different semantics. Guided by the causal model, this counterfactual data augmentation yields a length-robust reward model. Experiments show that our method significantly reduces the sensitivity of reward scores to response length while improving discriminative power and robustness in quality assessment; downstream policy models in turn generate more concise, information-dense responses. These results validate the efficacy and practicality of causal reasoning in reward modeling.
📝 Abstract
Reinforcement learning from human feedback (RLHF) is widely used to align large language models (LLMs) with human preferences. However, RLHF-trained reward models often exhibit length bias: a systematic tendency to favor longer responses by conflating verbosity with quality. We propose a causal framework for analyzing and mitigating length bias in RLHF reward modeling. Central to our approach is a counterfactual data augmentation method that generates response pairs designed to isolate content quality from verbosity: (1) length-divergent pairs with similar content and (2) content-divergent pairs of similar length. Training the reward model on these counterfactual examples enables it to assess responses based on content quality, independently of verbosity. Empirical evaluations show that our method reduces length bias in reward assignment and leads to more concise, content-focused outputs from the policy model, improving the robustness and content sensitivity of reward modeling in RLHF pipelines.
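The two pair types can be sketched with a toy example. The snippet below is a minimal illustration of the idea, not the paper's implementation: the reward model is a linear scorer over two hand-made features, `[normalized length, content quality]`; content-divergent pairs get a standard pairwise Bradley-Terry preference loss, while length-divergent pairs (same content, different length) get a tie penalty on the score difference so that length alone cannot move the reward. All names, features, and hyperparameters here are our own assumptions for the sketch.

```python
import math

def reward(w, x):
    """Toy linear reward model: score = w[0]*length + w[1]*quality."""
    return w[0] * x[0] + w[1] * x[1]

def train_reward(pairs_content, pairs_length, lr=0.1, steps=2000, lam=1.0):
    """Train on the two counterfactual pair types (illustrative sketch):
      - pairs_content: (preferred, rejected) of similar length but
        distinct content -> Bradley-Terry preference loss.
      - pairs_length: (short, long) with the same content -> tie penalty,
        i.e. squared score difference, so length alone earns no reward.
    """
    w = [0.0, 0.0]
    n = len(pairs_content) + len(pairs_length)
    for _ in range(steps):
        g = [0.0, 0.0]
        for xg, xb in pairs_content:
            # preference loss: -log sigmoid(r(preferred) - r(rejected))
            d = reward(w, xg) - reward(w, xb)
            p = 1.0 / (1.0 + math.exp(-d))
            for i in range(2):
                g[i] += (p - 1.0) * (xg[i] - xb[i])
        for xs, xl in pairs_length:
            # length counterfactual: same content must score the same
            d = reward(w, xs) - reward(w, xl)
            for i in range(2):
                g[i] += lam * 2.0 * d * (xs[i] - xl[i])
        for i in range(2):
            w[i] -= lr * g[i] / n
    return w

# Features: [normalized length, content quality]. In the biased data the
# preferred response is both better AND longer, so a naive model cannot
# tell the two factors apart.
pairs_content = [([1.0, 1.0], [0.0, 0.0])]   # better response is also longer
pairs_length  = [([0.0, 0.5], [1.0, 0.5])]   # same content, different length

w_naive = train_reward(pairs_content, [])    # no counterfactual pairs
w_debiased = train_reward(pairs_content, pairs_length)
print("naive:   ", w_naive)     # length and quality weights identical: biased
print("debiased:", w_debiased)  # length weight near 0, quality carries signal
```

In the naive run the length and quality features move in lockstep, so the model splits credit equally between them; adding the length-divergent tie pairs breaks that symmetry and drives the length weight toward zero, which is exactly the isolation of content quality from verbosity the abstract describes.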