🤖 AI Summary
This work investigates how reward models implicitly encode human values, focusing on critical biases -- including scoring heterogeneity, prompt-framing sensitivity, frequent-word preferences, and asymmetric response patterns between high- and low-scoring outputs. We propose the first explainability framework based on full-vocabulary traversal and conduct a systematic evaluation across ten mainstream open-source reward models spanning diverse architectures and parameter scales. Our key findings are: (1) harmlessness training can induce identity-group bias; (2) structural asymmetry exists between token representations associated with high versus low scores; and (3) substantial cross-model heterogeneity undermines the "model interchangeability" assumption. These results reveal fundamental limitations in using reward models as proxies for human values, challenging their reliability in alignment evaluation and safe deployment. The study provides both empirical evidence and a methodological foundation for rigorous reward-model assessment.
📝 Abstract
Reward modeling has emerged as a crucial component in aligning large language models with human values. Significant attention has focused on using reward models to fine-tune generative models. However, the reward models themselves -- which directly encode human value judgments by mapping prompt-response pairs to scalar rewards -- remain relatively understudied. We present a novel approach to reward model interpretability through exhaustive analysis of their responses across their entire vocabulary space. By examining how different reward models score every possible single-token response to value-laden prompts, we uncover several striking findings: (i) substantial heterogeneity between models trained on similar objectives; (ii) systematic asymmetries in how models encode high- versus low-scoring tokens; (iii) significant sensitivity to prompt framing that mirrors human cognitive biases; and (iv) overvaluation of more frequent tokens. We demonstrate these effects across ten recent open-source reward models of varying parameter counts and architectures. Our results challenge assumptions about the interchangeability of reward models, as well as their suitability as proxies for complex, context-dependent human values. We find that these models can encode concerning biases toward certain identity groups, which may emerge as unintended consequences of harmlessness training -- distortions that risk propagating through the downstream large language models now deployed to millions.
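For concreteness, the full-vocabulary traversal can be sketched in a few lines of Python. The snippet below is a minimal illustration, not the authors' implementation: it assumes a reward model with a scalar classification head loadable via Hugging Face transformers, and the model name, prompt, and prompt-response formatting are placeholders (real reward models typically require their own chat template, and the per-token loop would be batched in practice).

```python
# Minimal sketch of full-vocabulary traversal scoring (assumptions:
# a scalar-head reward model loadable via AutoModelForSequenceClassification;
# the model name below is a hypothetical placeholder).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "some-org/some-reward-model"  # placeholder, not from the paper
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

prompt = "Is it ever acceptable to lie to protect someone's feelings?"

scores = {}
with torch.no_grad():
    for token_id in range(tokenizer.vocab_size):
        # Treat the single decoded token as the entire response.
        response = tokenizer.decode([token_id])
        # Naive prompt-response concatenation; most reward models
        # expect their own chat template here instead.
        text = prompt + "\n" + response
        inputs = tokenizer(text, return_tensors="pt", truncation=True)
        # The scalar logit of the reward head is the score for this response.
        scores[token_id] = model(**inputs).logits.squeeze().item()

# Rank the vocabulary by reward to inspect what the model values most and least.
ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
for tid, score in ranked[:10]:
    print(f"{tokenizer.decode([tid])!r}: {score:.3f}")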