Reward Models are Metrics in a Trench Coat

📅 2025-10-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current research on reward modeling and evaluation metrics operates in silos, leading to terminological redundancy, spurious correlations, heightened reward hacking risks, and duplicated efforts in data quality optimization and meta-evaluation. Through a systematic literature review and comparative analysis, we reveal that both reward models and evaluation metrics fundamentally serve the same purpose in language model post-training: preference modeling and performance calibration. Building on this insight, we propose a unified research framework integrating three core directions—preference acquisition, spurious correlation mitigation, and meta-evaluation calibration. Empirical experiments demonstrate that certain evaluation metrics significantly outperform existing reward models on specific tasks. This work clarifies the root causes of conceptual ambiguity in the field and fosters cross-paradigm collaboration, providing both theoretical foundations and practical pathways for developing robust, interpretable, and reusable alignment evaluation systems.

📝 Abstract
The emergence of reinforcement learning in post-training of large language models has sparked significant interest in reward models. Reward models assess the quality of sampled model outputs to generate training signals. This task is also performed by evaluation metrics that monitor the performance of an AI model. We find that the two research areas are mostly separate, leading to redundant terminology and repeated pitfalls. Common challenges include susceptibility to spurious correlations, impact on downstream reward hacking, methods to improve data quality, and approaches to meta-evaluation. Our position paper argues that a closer collaboration between the fields can help overcome these issues. To that end, we show how metrics outperform reward models on specific tasks and provide an extensive survey of the two areas. Grounded in this survey, we point to multiple research topics in which closer alignment can improve reward models and metrics in areas such as preference elicitation methods, avoidance of spurious correlations and reward hacking, and calibration-aware meta-evaluation.
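To make the central claim concrete, below is a minimal sketch (not code from the paper) of the shared role the abstract describes: both a reward model and an evaluation metric reduce to a function that scores a sampled output, so a downstream use such as best-of-n selection is agnostic to which one supplies the score. The `predict_reward` and `evaluate` calls are hypothetical stand-ins for whatever scoring backend is actually used.

```python
from typing import Protocol


class OutputScorer(Protocol):
    """Shared interface: reward models and metrics both map (prompt, response) to a scalar."""

    def score(self, prompt: str, response: str) -> float: ...


class RewardModelScorer:
    """A learned reward model producing training signals for RL post-training."""

    def __init__(self, model):
        self.model = model  # hypothetical preference-trained model

    def score(self, prompt: str, response: str) -> float:
        return self.model.predict_reward(prompt, response)  # hypothetical API


class MetricScorer:
    """An evaluation metric used to monitor model output quality."""

    def __init__(self, metric):
        self.metric = metric  # hypothetical metric backend, e.g. a learned QE metric

    def score(self, prompt: str, response: str) -> float:
        return self.metric.evaluate(prompt, response)  # hypothetical API


def best_of_n(scorer: OutputScorer, prompt: str, candidates: list[str]) -> str:
    """Best-of-n selection works identically with either kind of scorer."""
    return max(candidates, key=lambda response: scorer.score(prompt, response))
```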
Problem

Research questions and friction points this paper is trying to address.

Reward model and evaluation metric research have developed largely in isolation, producing redundant terminology
Both fields struggle with spurious correlations and downstream reward hacking (illustrated in the probe sketch after this list)
Closer collaboration could improve preference elicitation and meta-evaluation methods
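As one concrete illustration of the shared spurious-correlation problem noted above, the sketch below probes a scorer for length bias, a well-known spurious correlation affecting both reward models and learned metrics and a common route to reward hacking. The probe is illustrative, not the paper's methodology; `scorer` is any object exposing the hypothetical `score(prompt, response)` interface from the earlier sketch.

```python
import statistics


def length_bias_probe(scorer, prompt: str, candidates: list[str]) -> float:
    """Pearson correlation between response length and assigned score.

    A strongly positive value suggests the scorer rewards verbosity rather
    than quality, which a policy can exploit during RL training.
    """
    lengths = [float(len(response.split())) for response in candidates]
    scores = [scorer.score(prompt, response) for response in candidates]
    return statistics.correlation(lengths, scores)  # requires Python 3.10+
```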
Innovation

Methods, ideas, or system contributions that make the work stand out.

Shows that reward models and evaluation metrics perform the same underlying task: scoring the quality of sampled model outputs
Demonstrates that evaluation metrics outperform existing reward models on specific tasks
Proposes closer alignment of the two fields on preference elicitation, avoidance of spurious correlations and reward hacking, and calibration-aware meta-evaluation (see the sketch below)
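The last point concerns meta-evaluation, i.e. evaluating the evaluators. A standard protocol shared by reward-model benchmarks and metric shared tasks is pairwise accuracy against human preferences, sketched below under the same hypothetical `score` interface; a calibration-aware variant would additionally check whether score gaps track preference strength, which this minimal version does not attempt.

```python
def pairwise_accuracy(scorer, preference_pairs) -> float:
    """Fraction of human preference pairs where the scorer ranks the chosen
    response above the rejected one.

    `preference_pairs` is an iterable of (prompt, chosen, rejected) tuples;
    `scorer` exposes the hypothetical score(prompt, response) method.
    """
    pairs = list(preference_pairs)
    correct = sum(
        scorer.score(prompt, chosen) > scorer.score(prompt, rejected)
        for prompt, chosen, rejected in pairs
    )
    return correct / len(pairs)
```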