Mitigating Length Bias in RLHF through a Causal Lens

📅 2025-11-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Reward models trained via Reinforcement Learning from Human Feedback (RLHF) often exhibit length bias, erroneously conflating response length with quality and thereby favoring verbose outputs. To address this, we propose a causal-inference-based debiasing framework that explicitly disentangles response length from content quality. Specifically, we construct two types of counterfactual sample pairs: (1) semantically near-identical responses with substantially different lengths, and (2) responses of similar length but markedly distinct semantics. Guided by this causal model, counterfactual data augmentation is used to train a length-robust reward model. Experiments demonstrate that our method significantly reduces the sensitivity of reward scores to response length while improving discriminative capability and robustness in quality assessment. Consequently, downstream policy models generate more concise, information-dense responses. These results validate the efficacy and practicality of causal reasoning in reward modeling.

📝 Abstract
Reinforcement learning from human feedback (RLHF) is widely used to align large language models (LLMs) with human preferences. However, RLHF-trained reward models often exhibit length bias -- a systematic tendency to favor longer responses by conflating verbosity with quality. We propose a causal framework for analyzing and mitigating length bias in RLHF reward modeling. Central to our approach is a counterfactual data augmentation method that generates response pairs designed to isolate content quality from verbosity. These counterfactual examples are then used to train the reward model, enabling it to assess responses based on content quality independently of verbosity. Specifically, we construct (1) length-divergent pairs with similar content and (2) content-divergent pairs of similar length. Empirical evaluations show that our method reduces length bias in reward assignment and leads to more concise, content-focused outputs from the policy model. These findings demonstrate that the proposed approach effectively reduces length bias and improves the robustness and content sensitivity of reward modeling in RLHF pipelines.
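The two counterfactual pair types described in the abstract can be made concrete with a small data structure. The sketch below is purely illustrative — the `CounterfactualPair` class and builder functions are hypothetical names, not taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class CounterfactualPair:
    """A response pair built to disentangle length from quality."""
    response_a: str
    response_b: str
    pair_type: str  # "length_divergent" or "content_divergent"
    preferred: int  # 0 -> a wins, 1 -> b wins, -1 -> no winner

def length_divergent(concise: str, verbose: str) -> CounterfactualPair:
    # Type (1): near-identical content, substantially different lengths.
    # The reward model should score these (nearly) equally, so no
    # preference label is assigned.
    assert len(verbose.split()) > len(concise.split())
    return CounterfactualPair(concise, verbose, "length_divergent", -1)

def content_divergent(better: str, worse: str) -> CounterfactualPair:
    # Type (2): similar length, clearly different quality.
    # The higher-quality response is the preferred one.
    return CounterfactualPair(better, worse, "content_divergent", 0)
```

A length-divergent pair carries no winner, which lets a training objective treat it as an invariance constraint rather than a preference comparison.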
Problem

Research questions and friction points this paper is trying to address.

RLHF reward models exhibit length bias favoring verbose responses
Current methods conflate response length with actual content quality
Need to isolate content quality assessment from verbosity in RLHF
Innovation

Methods, ideas, or system contributions that make the work stand out.

Causal framework mitigates length bias in RLHF
Counterfactual data augmentation isolates content quality
Training with length-content divergent pairs reduces verbosity bias
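A training objective consistent with these ideas can be sketched with a toy linear reward model: content-divergent pairs receive a standard pairwise (Bradley–Terry-style) preference loss, while length-divergent pairs receive an invariance penalty that drives their reward gap toward zero. Everything here — the two hand-crafted features, the penalty weight `lam`, and the finite-difference training loop — is a hypothetical, dependency-free illustration, not the paper's implementation:

```python
import math

def features(text: str) -> list[float]:
    words = text.split()
    # Toy features: response length, plus a crude content proxy
    # (unique-word ratio). A real reward model would use an LLM encoder.
    return [float(len(words)), len(set(words)) / max(len(words), 1)]

def reward(w: list[float], text: str) -> float:
    return sum(wi * xi for wi, xi in zip(w, features(text)))

def loss(w: list[float], pairs, lam: float = 1.0) -> float:
    total = 0.0
    for a, b, kind in pairs:
        gap = reward(w, a) - reward(w, b)
        if kind == "content":  # a is the preferred response
            total += math.log(1.0 + math.exp(-gap))  # pairwise preference loss
        else:                  # "length": same content, different verbosity
            total += lam * gap * gap                 # invariance penalty
    return total / len(pairs)

def train(pairs, steps: int = 300, lr: float = 0.01) -> list[float]:
    # Plain gradient descent with central-difference gradients,
    # to keep the sketch free of external dependencies.
    w, eps = [0.0, 0.0], 1e-4
    for _ in range(steps):
        grad = []
        for i in range(len(w)):
            hi, lo = w[:], w[:]
            hi[i] += eps
            lo[i] -= eps
            grad.append((loss(hi, pairs) - loss(lo, pairs)) / (2 * eps))
        w = [wi - lr * g for wi, g in zip(w, grad)]
    return w
```

After training on a handful of pairs, the reward gap on length-divergent pairs shrinks toward zero while the preferred response of a content-divergent pair still scores higher — the qualitative behavior the paper's length-robust reward model is designed to achieve.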
Hyeonji Kim
Graduate School of Data Science, Seoul National University
Sujeong Oh
Graduate School of Data Science, Seoul National University
Sanghack Lee
Seoul National University
Artificial Intelligence · Machine Learning · Causal Discovery · Causal Inference