ConsistRM: Improving Generative Reward Models via Consistency-Aware Self-Training

📅 2026-04-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses three challenges of generative reward modeling: reliance on costly human annotations, instability in self-training, and vulnerability to reward hacking. It introduces ConsistRM, a consistency-aware self-training framework that enables stable and efficient reward-model training without human labels. The approach combines a Consistency-Aware Answer Reward with a Consistency-Aware Critique Reward, integrating multi-critique semantic-consistency evaluation into reinforcement fine-tuning to improve pseudo-label reliability and reward granularity, which mitigates position bias and improves output consistency. Evaluated across four base models and five benchmark datasets, ConsistRM outperforms vanilla Reinforcement Fine-Tuning (RFT) by an average of 1.5%, with gains in both training stability and alignment performance.
📝 Abstract
Generative reward models (GRMs) have emerged as a promising approach for aligning Large Language Models (LLMs) with human preferences by offering greater representational capacity and flexibility than traditional scalar reward models. However, GRMs face two major challenges: reliance on costly human-annotated data restricts scalability, and self-training approaches often suffer from instability and vulnerability to reward hacking. To address these issues, we propose ConsistRM, a self-training framework that enables effective and stable GRM training without human annotations. ConsistRM incorporates the Consistency-Aware Answer Reward, which produces reliable pseudo-labels with temporal consistency, thereby providing more stable model optimization. Moreover, the Consistency-Aware Critique Reward is introduced to assess semantic consistency across multiple critiques and to allocate fine-grained, differentiated rewards. Experiments on five benchmark datasets across four base models demonstrate that ConsistRM outperforms vanilla Reinforcement Fine-Tuning (RFT) by an average of 1.5%. Further analysis shows that ConsistRM enhances output consistency and mitigates position bias caused by input order, highlighting the effectiveness of consistency-aware rewards in improving GRMs.
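The abstract names the two reward signals but not their exact computation. The sketch below gives one plausible reading: it assumes the answer reward pseudo-labels each preference pair by majority vote over the GRM's sampled verdicts and trusts the label only when the majority is strong and agrees with the previous training round (temporal consistency), and that the critique reward scores each sampled critique by its mean embedding similarity to the other critiques. The function names, the 0.75 agreement threshold, and the use of cosine similarity are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only: ConsistRM's actual reward definitions may differ.
from collections import Counter

import numpy as np


def answer_reward(current_votes, previous_votes, agreement=0.75):
    """Consistency-Aware Answer Reward (sketch).

    Pseudo-label a preference pair by majority vote over sampled GRM
    verdicts, and flag it as reliable only when the majority is strong
    AND it matches the previous round's majority (temporal consistency).
    The 0.75 threshold is an assumption for illustration.
    """
    label, count = Counter(current_votes).most_common(1)[0]
    prev_label, _ = Counter(previous_votes).most_common(1)[0]
    strong_majority = count / len(current_votes) >= agreement
    return label, strong_majority and label == prev_label


def critique_reward(critique_embeddings):
    """Consistency-Aware Critique Reward (sketch).

    Given k embeddings of critiques sampled for the same input (from any
    sentence encoder; the encoder choice is an assumption), reward each
    critique by its mean cosine similarity to the other k - 1 critiques,
    so semantically consistent critiques earn larger rewards.
    """
    embs = np.asarray(critique_embeddings, dtype=float)
    normed = embs / np.linalg.norm(embs, axis=1, keepdims=True)
    sim = normed @ normed.T            # pairwise cosine similarities
    k = sim.shape[0]
    return (sim.sum(axis=1) - 1.0) / (k - 1)  # drop self-similarity of 1


# Hypothetical usage with toy verdicts and random embeddings.
label, reliable = answer_reward(["A", "A", "B", "A"], ["A", "B", "A", "A"])
rewards = critique_reward(np.random.default_rng(0).normal(size=(4, 16)))
```

Under this reading, only pairs where reliable is True would contribute pseudo-labels to reinforcement fine-tuning, while the graded critique rewards give the optimizer a finer-grained signal than a single binary verdict.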
Problem

Research questions and friction points this paper is trying to address.

Generative Reward Models
Human Preference Alignment
Self-Training
Reward Hacking
Scalability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generative Reward Models
Self-Training
Consistency-Aware Reward
Reward Hacking Mitigation
Pseudo-Labeling
Yu Liang
Baidu Inc., Beijing, China
Liangxin Liu
Baidu Inc., Beijing, China
Longzheng Wang
Baidu Inc., Beijing, China
Yan Wang
Baidu Inc., Beijing, China
Yueyang Zhang
Baidu Inc., Beijing, China
Long Xia
Research Scientist, Baidu
information retrieval, data mining, applied machine learning, recommender systems
Zhiyuan Sun
Baidu Inc., Beijing, China
Daiting Shi
Baidu Inc., Beijing, China