Scaling Reward Modeling without Human Supervision

📅 2026-02-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Proposes RBS, an unsupervised reward-modeling method that learns preferences over prefix/suffix pairs drawn from web corpora, improving model performance on math and safety tasks without any human annotation.

📝 Abstract
Learning from feedback is an instrumental process for advancing the capabilities and safety of frontier models, yet its effectiveness is often constrained by cost and scalability. We present a pilot study that explores scaling reward models through unsupervised approaches. We operationalize reward-based scaling (RBS), in its simplest form, as preference learning over document prefixes and suffixes drawn from large-scale web corpora. Its advantages are demonstrated along several dimensions: despite using no human annotations, training on 11M tokens of math-focused web data yields steady gains on RewardBench v1 and v2, and these improvements consistently transfer across diverse initialization backbones spanning model families and scales. Across models, our method improves RewardBench v2 accuracy by up to +7.7 points on average, with gains of up to +16.1 on in-domain math subsets and consistent improvements on out-of-domain safety and general subsets. When applied to best-of-N selection and policy optimization, these reward models substantially improve downstream math performance and match or exceed strong supervised reward model baselines of similar size. Overall, we demonstrate the feasibility and promise of training reward models without costly and potentially unreliable human annotations.
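The core recipe the abstract describes, preference learning over document prefixes and suffixes with no human labels, can be sketched as below. The exact pair construction and loss are assumptions (the paper's details are not given here); a natural reading is that each document's own suffix is the "chosen" continuation of its prefix, a suffix from another document is the "rejected" one, and the reward model is trained with the standard pairwise Bradley-Terry objective:

```python
import math
import random

def make_preference_pairs(documents, split_ratio=0.5, seed=0):
    """Build unsupervised preference pairs from raw documents.

    Sketch of the prefix/suffix idea (the actual construction in the
    paper may differ): each document is cut into a prefix and its true
    suffix; the "chosen" response is the document's own suffix, the
    "rejected" response is a suffix sampled from a different document.
    """
    rng = random.Random(seed)
    pairs = []
    for i, doc in enumerate(documents):
        cut = int(len(doc) * split_ratio)
        prefix, true_suffix = doc[:cut], doc[cut:]
        # Rejected continuation: suffix of a randomly chosen other document.
        j = rng.choice([k for k in range(len(documents)) if k != i])
        other = documents[j]
        wrong_suffix = other[int(len(other) * split_ratio):]
        pairs.append({"prompt": prefix,
                      "chosen": true_suffix,
                      "rejected": wrong_suffix})
    return pairs

def bradley_terry_loss(r_chosen, r_rejected):
    """Standard pairwise reward-model loss: -log sigmoid(r_chosen - r_rejected).

    Minimized when the reward model scores the true continuation above
    the mismatched one.
    """
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

docs = ["Theorem: 2 + 2 = 4. Proof: by arithmetic.",
        "The capital of France is Paris, a city on the Seine."]
pairs = make_preference_pairs(docs)
```

A real implementation would feed `prompt + chosen` and `prompt + rejected` through a reward model head and backpropagate the loss; the snippet only illustrates the data and objective.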
Problem

Research questions and friction points this paper is trying to address.

reward modeling
unsupervised learning
human feedback
scalability
preference learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

unsupervised reward modeling
reward-based scaling
preference learning
human-free feedback
scalable alignment
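The best-of-N use mentioned in the abstract reduces to scoring N sampled completions with the learned reward model and keeping the argmax. A minimal sketch, where `reward_fn` stands in for the trained reward model (the toy scorer below is purely illustrative):

```python
def best_of_n(prompt, candidates, reward_fn):
    """Best-of-N selection: return the candidate completion that the
    reward model scores highest for the given prompt."""
    return max(candidates, key=lambda c: reward_fn(prompt, c))

# Toy stand-in reward: prefer longer answers. A real system would call
# the trained RBS reward model here.
def toy_reward(prompt, completion):
    return len(completion)
```

For example, `best_of_n("Solve x+1=3.", samples, toy_reward)` picks whichever sampled answer the scorer ranks first; swapping in the unsupervised reward model is what yields the downstream math gains reported in the abstract.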