Skywork-Reward-V2: Scaling Preference Data Curation via Human-AI Synergy

📅 2025-07-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing open-source reward models (RMs) underperform on mainstream benchmarks, largely because their preference datasets are small in scale, narrow in coverage, and low in quality (often synthetically generated or lacking rigorous quality control), and thus fail to capture the complexity of human preferences. To address this, we propose a two-stage, human-AI collaborative data construction paradigm that combines the high reliability of human annotators with the strong scalability of large language models, yielding SynPref-40M, a high-quality, large-scale preference dataset of 40 million preference pairs. Training on a carefully curated 26-million-pair subset of SynPref-40M, we build Skywork-Reward-V2, a suite of eight RMs spanning 0.6B to 8B parameters that achieves state-of-the-art performance across seven major benchmarks. Our results demonstrate that simultaneously scaling dataset size and improving annotation quality via human-AI collaboration is critical for RM generalization and alignment with human preferences.
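
To make the two-stage pipeline concrete, below is a minimal Python sketch of one curation round as described above. The function names (`llm_label`, `human_verify`), the confidence field, and the 0.8 escalation threshold are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of one round of human-AI preference curation.
# All names and thresholds are hypothetical stand-ins for the paper's pipeline.
from dataclasses import dataclass
from typing import List

@dataclass
class PreferencePair:
    prompt: str
    chosen: str
    rejected: str
    confidence: float = 0.0  # LLM judge's confidence in the (chosen, rejected) label

def llm_label(pair: PreferencePair, guidance: str) -> PreferencePair:
    """AI pass: an LLM judge (re)labels a pair following human-written guidance."""
    # Stub: a real pipeline would query an LLM judge here and record its
    # confidence that `chosen` is actually preferred over `rejected`.
    pair.confidence = 0.9
    return pair

def human_verify(pairs: List[PreferencePair]) -> List[PreferencePair]:
    """Human pass: annotators audit the pairs the LLM was unsure about."""
    # Stub: route to human annotators and keep only the confirmed pairs.
    return list(pairs)

def curate(raw: List[PreferencePair], guidance: str) -> List[PreferencePair]:
    """One curation round: scale with the LLM, escalate hard cases to humans."""
    labeled = [llm_label(p, guidance) for p in raw]
    confident = [p for p in labeled if p.confidence >= 0.8]
    uncertain = [p for p in labeled if p.confidence < 0.8]
    return confident + human_verify(uncertain)
```

The design point is the division of labor: the LLM pass handles volume, while human effort is spent only on the pairs the model cannot label confidently.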

📝 Abstract
Despite the critical role of reward models (RMs) in reinforcement learning from human feedback (RLHF), current state-of-the-art open RMs perform poorly on most existing evaluation benchmarks, failing to capture the spectrum of nuanced and sophisticated human preferences. Even approaches that incorporate advanced training techniques have not yielded meaningful performance improvements. We hypothesize that this brittleness stems primarily from limitations in preference datasets, which are often narrowly scoped, synthetically labeled, or lack rigorous quality control. To address these challenges, we present a large-scale preference dataset comprising 40 million preference pairs, named SynPref-40M. To enable data curation at scale, we design a human-AI synergistic two-stage pipeline that leverages the complementary strengths of human annotation quality and AI scalability. In this pipeline, humans provide verified annotations, while large language models perform automatic curation based on human guidance. Training on this preference mixture, we introduce Skywork-Reward-V2, a suite of eight reward models ranging from 0.6B to 8B parameters, trained on a carefully curated subset of 26 million preference pairs from SynPref-40M. We demonstrate that Skywork-Reward-V2 is versatile across a wide range of capabilities, including alignment with human preferences, objective correctness, safety, resistance to stylistic biases, and best-of-N scaling, achieving state-of-the-art performance across seven major reward model benchmarks. Ablation studies confirm that the effectiveness of our approach stems not only from data scale but also from high-quality curation. The Skywork-Reward-V2 series represents substantial progress in open reward models, highlighting the untapped potential of existing preference datasets and demonstrating how human-AI curation synergy can unlock significantly higher data quality.
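
Among the capabilities listed in the abstract, best-of-N scaling is the easiest to illustrate in code: sample N candidate responses, score each with the reward model, and keep the highest-scoring one. A hedged sketch follows, assuming the released checkpoints expose the standard sequence-classification reward-model interface in `transformers`; the model id is an assumption based on the series naming and should be verified against the Skywork organization on Hugging Face.

```python
# Hedged sketch of best-of-N selection with a reward model. The checkpoint
# name below is an assumption based on the series naming; verify it before use.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Skywork/Skywork-Reward-V2-Llama-3.1-8B"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_id)
rm = AutoModelForSequenceClassification.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto", num_labels=1
)

def best_of_n(prompt: str, candidates: list[str]) -> str:
    """Score each (prompt, response) pair with the RM; return the top response."""
    scores = []
    for response in candidates:
        messages = [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": response},
        ]
        input_ids = tokenizer.apply_chat_template(
            messages, tokenize=True, return_tensors="pt"
        ).to(rm.device)
        with torch.no_grad():
            # A single-label classification head yields one scalar reward.
            scores.append(rm(input_ids).logits[0][0].item())
    return candidates[scores.index(max(scores))]
```

As N grows, the quality of the selected response tracks how well the RM ranks nuanced preferences, which is why best-of-N is used as a downstream probe of reward-model quality.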
Problem

Research questions and friction points this paper is trying to address.

Improving reward models' performance on nuanced human preferences
Addressing limitations in narrow, synthetic, or low-quality preference datasets
Enhancing data curation via human-AI synergy for scalable quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Human-AI two-stage pipeline for data curation
Large-scale SynPref-40M preference dataset
Skywork-Reward-V2 suite of eight reward models (0.6B–8B parameters)
👥 Authors

Chris Yuhao Liu
University of California, Santa Cruz
post-training, reward modeling, reasoning

Liang Zeng
2050 Research, Skywork AI

Yuzhen Xiao
2050 Research, Skywork AI

Jujie He
2050 Research, Skywork AI

Jiacai Liu
Fudan University
reinforcement learning

Chaojie Wang
2050 Research, Skywork AI

Rui Yan
2050 Research, Skywork AI

Wei Shen
2050 Research, Skywork AI

Fuxiang Zhang
Nanyang Technological University
Language Modeling, Reinforcement Learning

Jiacheng Xu
Nanyang Technological University
Reinforcement Learning, Large Language Model

Yang Liu
2050 Research, Skywork AI

Yahui Zhou
2050 Research, Skywork AI