Rethinking Reward Model Evaluation: Are We Barking up the Wrong Tree?

📅 2024-10-08
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Current reward model (RM) evaluation relies heavily on validation-set preference accuracy, yet this metric correlates only weakly with downstream reinforcement learning (RL) policy performance and therefore fails to reliably predict optimization outcomes. Method: We construct a controlled synthetic environment and run systematic analyses, including correlation studies, error attribution, and theoretical modeling through the lens of Goodhart's law, to test how valid accuracy is as a proxy for RM quality. Contribution/Results: We find that accuracy is undermined by the Regressional Goodhart effect: apparent accuracy gains can reflect fitting annotation noise rather than better modeling of true preferences. Experiments show that RMs with similar accuracy can induce substantially different RL policy performance, and that accuracy's value as a proxy depends critically on data distribution and annotation quality. These findings challenge the accuracy-centric RM evaluation paradigm and provide theoretical grounding and empirical evidence for building more robust RM assessment frameworks.
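
For context on the Regressional Goodhart effect invoked above, the block below gives its standard textbook form under a jointly Gaussian assumption; this is background added for this summary, not a derivation from the paper, and the symbols (true reward R, proxy score \hat{R}, noise \varepsilon) are notation chosen here.

```latex
% Regressional Goodhart under a jointly Gaussian assumption (illustrative background,
% not the paper's derivation).
% Proxy score \hat{R} = R + \varepsilon, with R \sim \mathcal{N}(0, \sigma_R^2),
% \varepsilon \sim \mathcal{N}(0, \sigma_\varepsilon^2), and R independent of \varepsilon.
\mathbb{E}\!\left[\, R \mid \hat{R} = t \,\right]
  = \frac{\sigma_R^2}{\sigma_R^2 + \sigma_\varepsilon^2}\, t
% Selecting responses with a high proxy score t yields true reward attenuated by the
% signal-to-total-variance ratio: the larger the noise \sigma_\varepsilon^2, the less
% true reward an optimized policy actually obtains for a given proxy value.
```

The attenuation factor shrinks as the noise variance grows, which is why accuracy gains obtained by fitting annotation noise need not translate into true-reward gains under optimization pressure.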

📝 Abstract
Reward Models (RMs) are crucial for aligning language models with human preferences. Currently, the evaluation of RMs depends on measuring accuracy against a validation set of manually annotated preference data. Although this method is straightforward and widely adopted, the relationship between RM accuracy and downstream policy performance remains under-explored. In this work, we conduct experiments in a synthetic setting to investigate how differences in RM measured by accuracy translate into gaps in optimized policy performance. Our findings reveal that while there is a weak positive correlation between accuracy and downstream performance, policies optimized towards RMs with similar accuracy can exhibit quite different performance. Moreover, we discover that the way of measuring accuracy significantly impacts its ability to predict the final policy performance. Through the lens of the Regressional Goodhart effect, we recognize that accuracy, when used for measuring RM quality, can fail to fully capture the potential RM overoptimization. This underscores the inadequacy of relying solely on accuracy to reflect their impact on policy optimization.
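
To make the abstract's central observation concrete, here is a small toy simulation written for this summary rather than taken from the paper: it builds two synthetic reward models with comparable pairwise accuracy on random response pairs, then uses best-of-n selection as a crude stand-in for policy optimization. Every constant (pool size, noise scales, the 3% of over-scored responses, n = 64) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 20000                        # synthetic pool of candidate responses
true = rng.normal(size=N)        # stand-in for the "true" (gold) reward of each response

# RM A: uniform, moderate scoring noise on every response.
rm_a = true + rng.normal(scale=0.35, size=N)

# RM B: scores most responses very accurately, but wildly over-scores a small
# fraction of them (a crude stand-in for fitting annotation noise / outliers).
rm_b = true + rng.normal(scale=0.25, size=N)
bad = rng.random(N) < 0.03
rm_b[bad] += 4.0

def pairwise_accuracy(scores, n_pairs=200_000):
    """Agreement with the true reward ordering on randomly sampled response pairs."""
    i = rng.integers(0, N, n_pairs)
    j = rng.integers(0, N, n_pairs)
    return float(np.mean((scores[i] > scores[j]) == (true[i] > true[j])))

def best_of_n_true_reward(scores, n=64, trials=5000):
    """Average true reward of the response the RM picks out of n random candidates."""
    idx = rng.integers(0, N, size=(trials, n))
    picks = idx[np.arange(trials), scores[idx].argmax(axis=1)]
    return float(true[picks].mean())

for name, scores in [("RM A", rm_a), ("RM B", rm_b)]:
    print(f"{name}: pairwise accuracy ~ {pairwise_accuracy(scores):.3f}, "
          f"best-of-64 true reward ~ {best_of_n_true_reward(scores):.3f}")
```

With these choices the two models report comparable pairwise accuracy, yet the responses RM B selects carry much less true reward, echoing the finding that similarly accurate RMs can induce quite different optimized policies.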
Problem

Research questions and friction points this paper is trying to address.

Evaluating reward model quality via accuracy on held-out preference data
The relationship between RM accuracy and downstream policy performance
How the way accuracy is measured affects its ability to predict policy optimization outcomes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Investigates, in a controlled synthetic setting, how differences in RM accuracy translate into gaps in optimized policy performance.
Shows that policies optimized against RMs with similar accuracy can perform quite differently.
Explains, via the Regressional Goodhart effect, why accuracy alone fails to capture potential RM overoptimization.
Authors

Xueru Wen
School of Computer Science and Technology, University of Chinese Academy of Sciences
Natural Language Processing, Alignment, Large Language Models

Jie Lou
Xiaohongshu
Alignment, RLHF

Yaojie Lu
Institute of Software, Chinese Academy of Sciences
Information Extraction, Large Language Models

Hongyu Lin
Chinese Information Processing Laboratory, Institute of Software, Chinese Academy of Sciences

Xing Yu
Xiaohongshu Inc

Xinyu Lu
Chinese Information Processing Laboratory, Institute of Software, Chinese Academy of Sciences; University of Chinese Academy of Sciences

Ben He
Professor, University of Chinese Academy of Sciences
Natural Language Processing, Information Retrieval

Xianpei Han
Chinese Information Processing Laboratory, Institute of Software, Chinese Academy of Sciences

Debing Zhang
Xiaohongshu
Machine Learning, Computer Vision, Deep Learning

Le Sun
Institute of Software, CAS
Information Retrieval, Natural Language Processing