🤖 AI Summary
Existing reward modeling approaches rely on costly explicit human feedback and struggle to learn unbiased reward signals from implicit user interactions such as clicks or copies, primarily due to the absence of negative samples and inherent user behavior biases. To address these challenges, this work proposes ImplicitRM, a novel method that categorizes implicit feedback into four latent groups through hierarchical modeling and derives a theoretically unbiased learning objective via likelihood maximization. ImplicitRM is the first approach capable of effectively learning an unbiased reward model solely from implicit preference data. Both theoretical analysis and extensive experiments demonstrate its effectiveness and strong generalization across multiple real-world datasets.
📝 Abstract
Reward modeling is a long-standing challenge in reinforcement learning from human feedback (RLHF) for aligning language models. Current reward modeling relies heavily on explicit feedback data, which is costly to collect. In this work, we study \textit{implicit reward modeling} -- learning reward models from implicit human feedback (e.g., clicks and copies) -- as a cost-effective alternative. We identify two fundamental challenges in implicit reward modeling: (1) implicit preference data lacks definitive negative samples, rendering standard positive-negative classification methods inapplicable; (2) implicit preference data suffers from user preference bias: different responses have different propensities to elicit user feedback actions, which further complicates identifying definitive negative samples. To address these challenges, we propose ImplicitRM, which learns unbiased reward models from implicit preference data. ImplicitRM stratifies training samples into four latent groups via a stratification model and, building on this stratification, derives a learning objective through likelihood maximization that we prove to be theoretically unbiased, resolving both challenges. Experiments demonstrate that ImplicitRM learns accurate reward models across implicit preference datasets. Code is available on our project website.
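To make the core idea concrete, here is a minimal sketch of a likelihood-maximization objective over latent groups. This is not the paper's actual formulation: it assumes the four latent groups arise from crossing latent response quality (good/bad) with the observed feedback action (acted/silent), and the propensity values (`prop_good`, `prop_bad`) and function names are hypothetical. The point it illustrates is that marginalizing over latent quality lets unlabeled samples contribute without being treated as definitive negatives.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def marginal_nll(reward_logits, acted, prop_good=0.8, prop_bad=0.1):
    """Negative log-likelihood of observed implicit feedback under a
    latent-quality mixture (a hypothetical stand-in for ImplicitRM's
    four-group stratification: {good, bad} x {acted, silent}).

    reward_logits : reward-model scores; sigmoid gives p(response is good)
    acted         : 1 if the user gave implicit feedback (click/copy), else 0
    prop_good     : assumed propensity of feedback given a good response
    prop_bad      : assumed propensity of feedback given a bad response
    """
    p_good = sigmoid(np.asarray(reward_logits, dtype=float))
    acted = np.asarray(acted, dtype=float)
    # Marginalize out the latent quality: an observed action can come from
    # either a good or a bad response, each with its own propensity, so
    # silent samples are never forced to be definitive negatives.
    p_act = p_good * prop_good + (1.0 - p_good) * prop_bad
    p_act = np.clip(p_act, 1e-8, 1.0 - 1e-8)
    return -np.mean(acted * np.log(p_act) + (1.0 - acted) * np.log(1.0 - p_act))
```

Under this sketch, a response scored as likely good incurs a lower loss when the user acted on it than when the score and the action disagree, which is the gradient signal a reward model would train on; correcting for the differing propensities is what addresses the user preference bias described above.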