🤖 AI Summary
Existing discriminative reward models generalize poorly and depend heavily on large-scale human preference annotations. Method: This paper introduces a Generative Foundation Reward Model (GFRM), bringing generative modeling to reward learning. GFRM combines large-scale unsupervised pretraining with supervised fine-tuning, and shows that training with label smoothing is equivalent to optimizing a regularized pairwise ranking loss—thereby placing generative and discriminative objectives in the same class of training criteria. Crucially, GFRM generalizes across reward tasks with little or no task-specific adaptation. Results: Empirical evaluation shows that GFRM significantly outperforms strong baselines on response ranking, RLHF, and task adaptation with fine-tuning, enabling plug-and-play, general-purpose reward modeling as an alternative to task-specific discriminative approaches.
📝 Abstract
Reward models play an important role in aligning large language models (LLMs), but they are typically trained as discriminative models and rely only on labeled human preference data. In this paper, we explore methods that train reward models using both unlabeled and labeled data. Building on the generative models underlying LLMs, we develop a generative reward model that is first trained via large-scale unsupervised learning and then fine-tuned via supervised learning. We also show that, by using label smoothing, we are in fact optimizing a regularized pairwise ranking loss. This result, in turn, provides a new view of training reward models, linking generative and discriminative models under the same class of training objectives. The outcome of these techniques is a foundation reward model, which can be applied to a wide range of tasks with little or no further fine-tuning effort. Extensive experiments show that this model generalizes well across several tasks, including response ranking, reinforcement learning from human feedback, and task adaptation with fine-tuning, achieving significant performance improvements over several strong baseline models.
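The label-smoothing claim in the abstract can be checked directly in the standard Bradley–Terry pairwise setting (an assumption here; the paper's exact loss may differ). With reward margin Δ = r_chosen − r_rejected, the label-smoothed binary cross-entropy −(1−ε)·log σ(Δ) − ε·log σ(−Δ) rewrites, via log σ(−Δ) = log σ(Δ) − Δ, as the plain ranking loss −log σ(Δ) plus a margin penalty ε·Δ. A minimal numerical sketch of this identity:

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def smoothed_pairwise_loss(delta: float, eps: float) -> float:
    # Label-smoothed binary cross-entropy on a preference pair:
    # target 1 - eps for "chosen beats rejected", eps for the reverse.
    return -(1 - eps) * math.log(sigmoid(delta)) - eps * math.log(sigmoid(-delta))

def regularized_ranking_loss(delta: float, eps: float) -> float:
    # Plain pairwise ranking loss plus a margin regularizer eps * delta,
    # which discourages unboundedly large reward margins.
    return -math.log(sigmoid(delta)) + eps * delta

# The two losses agree for any margin and smoothing strength.
for delta in (-2.0, 0.5, 3.0):
    assert abs(smoothed_pairwise_loss(delta, 0.1)
               - regularized_ranking_loss(delta, 0.1)) < 1e-12
```

The function names and the choice ε = 0.1 are illustrative; the point is only that smoothing the pairwise labels and regularizing the ranking margin are the same objective up to a constant.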