VRM: Teaching Reward Models to Understand Authentic Human Preferences

📅 2026-03-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the susceptibility of conventional reward models to spurious correlations when aligning large language models with human preferences, a weakness that often yields inaccurate preference representations. To overcome this limitation, the authors propose Variational Reward Modeling (VRM), a framework that treats two aspects of human preference judgment as latent variables: high-dimensional objective weights that capture multi-objective trade-offs, and low-dimensional semantic features. Variational inference makes learning these latent structures efficient. The approach comes with a theoretically tighter generalization error bound than traditional reward models and shows significant empirical improvements over existing methods across multiple benchmark datasets, capturing authentic human preferences more faithfully.

📝 Abstract
Large Language Models (LLMs) have achieved remarkable success across diverse natural language tasks, yet the reward models used to align them often suffer from reward hacking: because prevailing approaches map prompt-response pairs directly to scalar scores, they may inadvertently capture spurious correlations rather than authentic human preferences. Human evaluation, in contrast, is a more sophisticated process: it first weighs the relative importance of multiple high-dimensional objectives according to the prompt context, then judges response quality through low-dimensional semantic features such as logical coherence and contextual appropriateness. Motivated by this observation, we propose Variational Reward Modeling (VRM), a novel framework that explicitly models the human evaluation process by treating both high-dimensional objective weights and low-dimensional semantic features as latent variables inferred through variational inference. We further provide a theoretical analysis showing that VRM achieves a tighter generalization error bound than the traditional reward model. Extensive experiments on benchmark datasets demonstrate that VRM significantly outperforms existing methods in capturing authentic human preferences.
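The latent-variable formulation described in the abstract can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration, not the authors' implementation: stand-in linear maps encode a prompt-response feature vector into the mean and log-variance of the two latents (objective weights and semantic features), a sample is drawn with the reparameterization trick, a scalar reward is read off the sample, and the training loss combines a Bradley-Terry preference term with KL regularization toward a standard-normal prior, in the style of a variational objective. All function names, dimensions, and the KL weight are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, d_w=8, d_s=2):
    # Hypothetical encoder: maps a prompt-response feature vector to the
    # mean and log-variance of the concatenated latents -- high-dimensional
    # objective weights (d_w) and low-dimensional semantic features (d_s).
    # A real model would learn this map; here it is a fixed averaging matrix.
    W = np.ones((d_w + d_s, x.size)) / x.size
    mu = W @ x
    log_var = np.zeros(d_w + d_s)
    return mu, log_var

def reparameterize(mu, log_var):
    # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    # KL( N(mu, diag(sigma^2)) || N(0, I) ), closed form.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def reward(z):
    # Stand-in scalar reward head over the sampled latent.
    return float(np.mean(z))

def vrm_loss(x_chosen, x_rejected, beta=0.1):
    # Negative variational objective: Bradley-Terry preference likelihood
    # under sampled latents, plus KL regularization toward the prior.
    mu_c, lv_c = encode(x_chosen)
    mu_r, lv_r = encode(x_rejected)
    r_c = reward(reparameterize(mu_c, lv_c))
    r_r = reward(reparameterize(mu_r, lv_r))
    nll = np.log1p(np.exp(-(r_c - r_r)))  # -log sigmoid(r_c - r_r)
    kl = kl_to_standard_normal(mu_c, lv_c) + kl_to_standard_normal(mu_r, lv_r)
    return nll + beta * kl
```

In this sketch the preference term pushes the reward of the chosen response above the rejected one, while the KL term keeps the latent posterior close to the prior; the paper's claimed benefit is that routing the score through these structured latents discourages shortcuts based on spurious surface correlations.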
Problem

Research questions and friction points this paper is trying to address.

reward hacking
human preferences
reward models
large language models
preference alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Variational Reward Modeling
human preference modeling
latent variable inference
reward hacking mitigation
semantic feature representation