Outcome Accuracy is Not Enough: Aligning the Reasoning Process of Reward Models

📅 2026-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the issue of “deceptive alignment” in existing reward models, where correct judgments are accompanied by flawed reasoning because training optimizes exclusively for outcome accuracy, which limits generalization in reinforcement learning from human feedback (RLHF). To mitigate this, the authors propose a “rationale consistency” metric that quantifies how well a model’s reasoning process aligns with human judgment, and combine it with outcome accuracy to form a hybrid training signal for a generative reward model (GenRM). Notably, this is the first fine-grained evaluation of rationale consistency. Experiments show that the trained GenRM achieves state-of-the-art performance on RM-Bench (87.1%) and JudgeBench (82%), outperforming outcome-only baselines by an average of 5%, and improves creative writing performance by 7% on Arena Hard v2 when used for RLHF, while substantially enhancing rationale consistency.
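
The summary describes a hybrid training signal but not its exact form. Below is a minimal sketch of one way such a signal could be composed; the interpolation weight, function names, and score ranges are illustrative assumptions, not the paper's formulation.

```python
# Minimal sketch of a hybrid training signal for a generative reward model.
# The weighting scheme, the function names, and the scores themselves are
# illustrative assumptions -- the paper's exact formulation is not shown here.

def hybrid_reward(outcome_correct: bool,
                  rationale_consistency: float,
                  alpha: float = 0.5) -> float:
    """Blend outcome accuracy with rationale consistency.

    outcome_correct: whether the GenRM's final preference matches the label.
    rationale_consistency: score in [0, 1] for how well the model's reasoning
        agrees with the human judgment (assumed to come from a separate scorer).
    alpha: interpolation weight (hypothetical; not specified in the summary).
    """
    outcome_score = 1.0 if outcome_correct else 0.0
    return alpha * outcome_score + (1.0 - alpha) * rationale_consistency


# A judgment that is correct but poorly reasoned receives a lower reward,
# which is the "deceptive alignment" case the paper targets.
print(hybrid_reward(outcome_correct=True, rationale_consistency=0.2))  # 0.6
print(hybrid_reward(outcome_correct=True, rationale_consistency=0.9))  # 0.95
```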

📝 Abstract
Generative Reward Models (GenRMs) and LLM-as-a-Judge exhibit deceptive alignment by producing correct judgments for incorrect reasons, as they are trained and evaluated to prioritize Outcome Accuracy, which undermines their ability to generalize during RLHF. We introduce Rationale Consistency, a fine-grained metric that quantifies the alignment between the model's reasoning process and human judgment. Our evaluation of frontier models reveals that rationale consistency effectively discriminates among state-of-the-art models and detects deceptive alignment, while outcome accuracy falls short in both respects. To mitigate this gap, we introduce a hybrid signal that combines rationale consistency with outcome accuracy for GenRM training. Our training method achieves state-of-the-art performance on RM-Bench (87.1%) and JudgeBench (82%), surpassing outcome-only baselines by an average of 5%. Using RM during RLHF, our method effectively improves performance as demonstrated on Arena Hard v2, notably yielding a 7% improvement in creative writing tasks. Further analysis confirms that our method escapes the deceptive alignment trap, effectively reversing the decline in rationale consistency observed in outcome-only training.
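
The abstract presents rationale consistency as a fine-grained measure of agreement between the model's reasoning process and human judgment. One plausible way to compute such a score, checking each claim in the model's rationale against the human preference rationale and averaging the verdicts, is sketched below; the claim splitting, the judge_claim oracle, and the aggregation are all assumptions and may differ from the paper's actual metric.

```python
# Hedged sketch of a fine-grained rationale-consistency score: each claim in
# the model's rationale is checked against the human rationale and the
# per-claim verdicts are averaged. All details here are illustrative.
from typing import Callable, List


def rationale_consistency(model_rationale: str,
                          human_rationale: str,
                          judge_claim: Callable[[str, str], bool]) -> float:
    """Return the fraction of rationale claims consistent with the human judgment.

    judge_claim is a caller-supplied verifier (e.g., an LLM judge); it is a
    hypothetical interface, not an API defined by the paper.
    """
    # Naive sentence-level claim splitting, used only for illustration.
    claims: List[str] = [c.strip() for c in model_rationale.split(".") if c.strip()]
    if not claims:
        return 0.0
    verdicts = [judge_claim(claim, human_rationale) for claim in claims]
    return sum(verdicts) / len(claims)
```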
Problem

Research questions and friction points this paper is trying to address.

Outcome Accuracy
Deceptive Alignment
Reward Models
Reasoning Process
RLHF
Innovation

Methods, ideas, or system contributions that make the work stand out.

Rationale Consistency
Deceptive Alignment
Generative Reward Models
Outcome Accuracy
RLHF