CausalRM: Causal-Theoretic Reward Modeling for RLHF from Observational User Feedbacks

📅 2026-03-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a limitation of current reinforcement learning from human feedback (RLHF): reward modeling relies on costly human annotations, while the cheap alternative of observational user signals (such as clicks or likes) is noisy and biased, inducing a mismatch between training and inference distributions. To overcome these challenges, we propose the first causally grounded observational reward modeling framework, which jointly mitigates feedback noise and selection bias through a noise-aware surrogate loss and propensity-score reweighting. The resulting optimization objective is provably equivalent to the ideal loss under noise-free conditions. Empirical evaluations demonstrate substantial performance gains of 49.2% on WildGuardMix and 32.7% on HarmBench over existing methods, with consistent improvements across multiple large language models.
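To make the selection-bias correction concrete, here is a minimal sketch of inverse-propensity reweighting. This is an illustrative assumption about the general technique, not the paper's code; the function name `ipw_loss` and the clipping threshold are hypothetical.

```python
def ipw_loss(losses, propensities, clip=0.05):
    """Reweight per-sample losses by inverse propensity scores.

    propensities[i] is an estimate of P(user gives feedback | response i).
    Responses that users rarely give feedback on are up-weighted, so the
    training objective matches the unbiased (all-responses) distribution.
    """
    weighted = []
    for loss, p in zip(losses, propensities):
        p = max(p, clip)  # clip tiny propensities to control variance
        weighted.append(loss / p)
    return sum(weighted) / len(weighted)
```

A sample observed with propensity 0.5 counts twice as much as one always observed, which in expectation cancels the over-representation of responses users felt strongly about.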

📝 Abstract
Despite the success of reinforcement learning from human feedback (RLHF) in aligning language models, current reward modeling heavily relies on experimental feedback data collected from human annotators under controlled and costly conditions. In this work, we introduce observational reward modeling -- learning reward models with observational user feedback (e.g., clicks, copies, and upvotes) -- as a scalable and cost-effective alternative. We identify two fundamental challenges in this setting: (1) observational feedback is noisy due to annotation errors, causing it to deviate from true user preferences; (2) observational feedback is biased by user preference, as users preferentially provide feedback on responses they feel strongly about, which creates a distribution shift between training and inference data. To address these challenges, we propose CausalRM, a causal-theoretic reward modeling framework that aims to learn unbiased reward models from observational feedback. To tackle challenge (1), CausalRM introduces a noise-aware surrogate loss term that is provably equivalent to the primal loss under noise-free conditions by explicitly modeling the annotation error generation process. To tackle challenge (2), CausalRM uses propensity scores -- the probability of a user providing feedback for a given response -- to reweight training samples, yielding a loss function that eliminates user preference bias. Extensive experiments across diverse LLM backbones and benchmark datasets validate that CausalRM effectively learns accurate reward signals from noisy and biased observational feedback and delivers substantial performance improvements on downstream RLHF tasks -- including a 49.2% gain on WildGuardMix and a 32.7% improvement on HarmBench. Code is available on our project website.
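The "provably equivalent to the primal loss under noise-free conditions" property can be illustrated with a standard backward-corrected surrogate loss for label noise. This sketch is an assumption about the general technique (in the style of loss correction for class-conditional noise), not CausalRM itself; the symmetric flip rate `rho` is a hypothetical parameter.

```python
import math

def bce(score, label):
    """Binary cross-entropy on a raw score via the logistic link."""
    p = 1.0 / (1.0 + math.exp(-score))
    return -math.log(p) if label == 1 else -math.log(1.0 - p)

def noise_corrected_bce(score, noisy_label, rho):
    """Surrogate loss whose expectation under symmetric label noise
    equals the clean BCE on the true label.

    With rho = 0 (no annotation errors) this reduces exactly to
    bce(score, noisy_label), i.e., the primal loss.
    """
    assert 0.0 <= rho < 0.5  # noise rate must be below chance level
    return ((1.0 - rho) * bce(score, noisy_label)
            - rho * bce(score, 1 - noisy_label)) / (1.0 - 2.0 * rho)
```

Taking the expectation over a label flipped with probability `rho` cancels the contribution of the wrong-label term, recovering the clean loss; setting `rho = 0` shows the noise-free equivalence claimed in the abstract.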
Problem

Research questions and friction points this paper is trying to address.

observational feedback
reward modeling
noise
bias
distribution shift
Innovation

Methods, ideas, or system contributions that make the work stand out.

CausalRM
observational feedback
reward modeling
propensity scoring
noise-aware loss