🤖 AI Summary
Implicit reward models (IM-RMs) exhibit significantly worse in-distribution and out-of-distribution generalization than explicit reward models (EX-RMs), despite sharing identical language model architectures, training data, and loss functions—differing only in how rewards are computed.
Method: Through theoretical analysis and systematic empirical evaluation, we identify that IM-RMs over-rely on superficial token-level cues, rendering them highly sensitive to token distribution shifts and fundamentally limiting their generalization capacity.
Contribution/Results: This work is the first to identify the reward modeling paradigm (implicit versus explicit) as the root cause of the generalization gap, and it challenges the intuitive claim that IM-RMs struggle in tasks where verification is easier than generation because they must act as generators as well as verifiers. We further demonstrate that seemingly minor design choices, particularly how the reward is computed, exert decisive influence on generalization behavior. Our findings provide theoretical insight and practical guidance for building robust, generalizable reward models.
📝 Abstract
Reward models are key to language model post-training and inference pipelines. Conveniently, recent work showed that every language model defines an implicit reward model (IM-RM), without requiring any architectural changes. However, such IM-RMs tend to generalize worse, especially out-of-distribution, compared to explicit reward models (EX-RMs) that apply a dedicated linear head over the hidden representations of a language model. The existence of a generalization gap is puzzling, as EX-RMs and IM-RMs are nearly identical. They can be trained using the same data, loss function, and language model, and differ only in how the reward is computed. Towards a fundamental understanding of the implicit biases underlying different reward model types, we investigate the root cause of this gap. Our main finding, backed by theory and experiments, is that IM-RMs rely more heavily on superficial token-level cues. Consequently, they often generalize worse than EX-RMs under token-level distribution shifts, as well as in-distribution. Furthermore, we provide evidence against alternative hypotheses for the generalization gap. Most notably, we challenge the intuitive claim that IM-RMs struggle in tasks where generation is harder than verification because they operate both as verifiers and generators. Taken together, our results highlight that seemingly minor design choices can substantially impact the generalization behavior of reward models.
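To make the distinction concrete, here is a minimal toy sketch (not the paper's implementation) of how the two reward types are computed from the same backbone: an EX-RM scores a response with a dedicated linear head over the final hidden state, while an IM-RM reads the reward off the token probabilities themselves, here in the common DPO-style log-likelihood-ratio form. All shapes, names, and the choice of `beta` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, hidden = 8, 4

def log_softmax(z):
    # Numerically stable log-softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

# Shared backbone outputs for one (prompt, response) sequence of 5 tokens,
# where the last 3 positions are the response (illustrative stand-ins).
hidden_states = rng.normal(size=(5, hidden))  # per-token hidden states
logits = rng.normal(size=(5, vocab))          # policy logits per position
ref_logits = rng.normal(size=(5, vocab))      # reference-model logits
response_tokens = np.array([2, 5, 1])         # token ids of the response

# EX-RM: dedicated linear head applied to the final hidden state.
w = rng.normal(size=hidden)
ex_rm_reward = float(hidden_states[-1] @ w)

# IM-RM: reward defined by the language model's own token probabilities,
# e.g. a DPO-style log-likelihood ratio summed over response tokens.
beta = 0.1
positions = np.arange(2, 5)
logp = log_softmax(logits)[positions, response_tokens]
ref_logp = log_softmax(ref_logits)[positions, response_tokens]
im_rm_reward = float(beta * (logp - ref_logp).sum())

print(ex_rm_reward, im_rm_reward)
```

Both rewards can be plugged into the same Bradley-Terry preference loss over chosen/rejected pairs; the paper's point is that only the reward head differs, yet this choice alone drives the generalization gap.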