🤖 AI Summary
This work addresses the tendency of existing generative reward models to rely on guesswork rather than sound reasoning in binary preference tasks, which introduces noisy reward signals and limits reinforcement learning performance. To overcome this, the authors propose RM-NLHF, a novel approach that leverages natural language human feedback as a process-level reward signal. By computing the semantic similarity between model-generated critiques and human-provided critiques, RM-NLHF enables finer-grained supervision. Furthermore, a meta reward model (MetaRM) is introduced to facilitate knowledge transfer from feedback-supervised to feedback-free data, effectively alleviating the constrained solution space inherent in outcome-only supervision. Experiments across multiple benchmarks demonstrate that RM-NLHF significantly outperforms state-of-the-art generative reward models that rely solely on outcome-based supervision, thereby validating the efficacy and superiority of natural language feedback as a process-oriented supervisory signal.
📝 Abstract
Reinforcement Learning with Verifiable Reward (RLVR) on preference data has become the mainstream approach for training Generative Reward Models (GRMs). Typically, in pairwise rewarding tasks, GRMs generate reasoning chains that end with critiques and preference labels, and RLVR uses the correctness of the preference labels as the training reward. However, in this paper, we demonstrate that such binary classification tasks make GRMs susceptible to guessing the correct outcome without producing sound critiques. These spurious successes introduce substantial noise into the reward signal, thereby impairing the effectiveness of reinforcement learning. To address this issue, we propose Reward Modeling from Natural Language Human Feedback (RM-NLHF), which leverages natural language feedback to obtain process reward signals, mitigating the limited solution space inherent in binary tasks. Specifically, we compute the similarity between GRM-generated and human critiques as the training reward, which provides more accurate reward signals than outcome-only supervision. Additionally, because human critiques are difficult to scale up, we introduce the Meta Reward Model (MetaRM), which learns to predict process rewards from datasets with human critiques and then generalizes to data without them. Experiments on multiple benchmarks demonstrate that our method consistently outperforms state-of-the-art GRMs trained with outcome-only rewards, confirming the superiority of natural language feedback over binary human feedback as a supervisory signal.
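The abstract's core mechanism is using critique similarity as a process-level reward. A minimal sketch of that idea is below; the paper's actual similarity function is not specified here, so this stand-in uses simple lexical similarity (`difflib.SequenceMatcher`), which is purely an illustrative assumption, as are the `process_reward` name and the example critiques.

```python
# Sketch (not the paper's implementation): score a GRM-generated critique
# by its similarity to a human-written critique, yielding a dense
# process reward in [0, 1] instead of a binary outcome reward.
from difflib import SequenceMatcher


def process_reward(model_critique: str, human_critique: str) -> float:
    """Return a reward in [0, 1] based on critique similarity.

    Lexical similarity is a placeholder; the paper presumably uses a
    semantic measure (e.g., embedding-based), which is not shown here.
    """
    return SequenceMatcher(
        None, model_critique.lower(), human_critique.lower()
    ).ratio()


# Hypothetical critiques for illustration only.
human = "Response A cites the correct formula but misapplies the units."
aligned = "Response A uses the correct formula but misapplies the units."
guessing = "Response B is longer and therefore better."

# A critique that reasons like the human one earns a higher reward than
# one that happens to guess an outcome without sound justification.
assert process_reward(aligned, human) > process_reward(guessing, human)
```

The point of the sketch is the reward shape: a graded signal tied to the reasoning itself, rather than a 0/1 signal tied only to the final preference label.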