GRAM: A Generative Foundation Reward Model for Reward Generalization

📅 2025-06-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing reward models are typically trained as discriminative models and rely heavily on large-scale human preference annotations, which limits their generalization. Method: This paper introduces GRAM, a generative foundation reward model that is first trained with large-scale unsupervised learning and then fine-tuned with supervised learning on labeled preference data. The authors further show that training with label smoothing amounts to optimizing a regularized pairwise ranking loss, which links generative and discriminative reward models under the same class of training objectives. The resulting foundation model can be applied to a wide range of tasks with little or no task-specific fine-tuning. Results: Experiments show that GRAM outperforms strong baseline models on response ranking, reinforcement learning from human feedback (RLHF), and task adaptation with fine-tuning, supporting its use as a general-purpose, plug-and-play reward model.
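As an illustration of the generative formulation, the sketch below scores a pair of candidate responses by prompting a causal LLM and reading the preference off the next-token probabilities of two label tokens. This is only a minimal sketch under assumed names: the model checkpoint, prompt template, and A/B label tokens are illustrative choices, not the paper's implementation.

```python
# Minimal sketch (not the paper's implementation) of a generative reward model:
# ask the LLM which answer is better and read the preference from the
# next-token probabilities of the labels "A" and "B".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "Qwen/Qwen2-0.5B-Instruct"  # placeholder base model, not GRAM's

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def preference_prob(query: str, answer_a: str, answer_b: str) -> float:
    """Return P(answer A is preferred over answer B) under the generative model."""
    prompt = (
        f"Question: {query}\n"
        f"Answer A: {answer_a}\n"
        f"Answer B: {answer_b}\n"
        "Which answer is better? Reply with A or B.\nAnswer:"
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        next_token_logits = model(**inputs).logits[0, -1]  # logits for the next token
    id_a = tokenizer.encode(" A", add_special_tokens=False)[0]
    id_b = tokenizer.encode(" B", add_special_tokens=False)[0]
    # Renormalize over the two label tokens only.
    probs = torch.softmax(next_token_logits[[id_a, id_b]], dim=-1)
    return probs[0].item()

if __name__ == "__main__":
    print(preference_prob("What is 2 + 2?", "4", "5"))
```

A discriminative reward model would instead attach a scalar head to the LLM and train it directly on labeled preference pairs, which is the setup the paper contrasts against.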

📝 Abstract
In aligning large language models (LLMs), reward models have played an important role, but are standardly trained as discriminative models and rely only on labeled human preference data. In this paper, we explore methods that train reward models using both unlabeled and labeled data. Building on the generative models in LLMs, we develop a generative reward model that is first trained via large-scale unsupervised learning and then fine-tuned via supervised learning. We also show that by using label smoothing, we are in fact optimizing a regularized pairwise ranking loss. This result, in turn, provides a new view of training reward models, which links generative models and discriminative models under the same class of training objectives. The outcome of these techniques is a foundation reward model, which can be applied to a wide range of tasks with little or no further fine-tuning effort. Extensive experiments show that this model generalizes well across several tasks, including response ranking, reinforcement learning from human feedback, and task adaptation with fine-tuning, achieving significant performance improvements over several strong baseline models.
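The claim that label smoothing amounts to optimizing a regularized pairwise ranking loss can be made concrete with a short derivation. The notation below (reward model r_theta, margin Delta, smoothing weight epsilon) is ours, and the paper's exact formulation may differ.

```latex
% Standard pairwise (Bradley--Terry) ranking loss on a preferred/rejected pair
% (y_w, y_l) for a prompt x, with margin \Delta = r_\theta(x, y_w) - r_\theta(x, y_l):
\mathcal{L}(\theta) = -\log \sigma(\Delta)

% With label smoothing, the reversed preference keeps probability mass \varepsilon:
\mathcal{L}_{\varepsilon}(\theta)
  = -(1-\varepsilon)\,\log \sigma(\Delta) \;-\; \varepsilon\,\log \sigma(-\Delta)
  = -\log \sigma(\Delta) \;+\; \varepsilon\,\Delta,

% using \log\sigma(\Delta) - \log\sigma(-\Delta) = \Delta. The extra term
% \varepsilon\,\Delta penalizes overly large reward margins, i.e. the smoothed
% objective is the plain ranking loss plus a regularizer.
```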
Problem

Research questions and friction points this paper is trying to address.

Training reward models with unlabeled and labeled data
Linking generative and discriminative models via label smoothing
Creating a foundation reward model for diverse tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generative reward model trained via unsupervised learning
Fine-tuned with supervised learning and label smoothing
Links generative and discriminative models via regularization
👥 Authors

Chenglong Wang
School of Computer Science and Engineering, Northeastern University, Shenyang, China

Yang Gan
School of Computer Science and Engineering, Northeastern University, Shenyang, China

Yifu Huo
Northeastern University

Yongyu Mu
Northeastern University
Topics: multilingualism, machine translation, efficient models

Qiaozhi He
ByteDance
Topics: LLM, Natural Language Processing

Murun Yang
School of Computer Science and Engineering, Northeastern University, Shenyang, China

Bei Li
Meituan LLM Team
Topics: Machine Translation, Deep Learning, Large Language Models

Tong Xiao
School of Computer Science and Engineering, Northeastern University, Shenyang, China; NiuTrans Research, Shenyang, China

Chunliang Zhang
School of Computer Science and Engineering, Northeastern University, Shenyang, China

Tongran Liu
CAS Key Laboratory of Behavioral Science, Institute of Psychology, CAS, Beijing, China

Jingbo Zhu
Northeastern University, China
Topics: Machine Translation, Language Parsing, Natural Language Processing