🤖 AI Summary
This work addresses the limited contextual understanding and poor out-of-distribution (OOD) generalization of language-model-based discriminative reward modeling. To this end, we propose a unified framework integrating natural language inference (NLI) with reward modeling. Our core innovation is an interpretable slot-prediction mechanism coupled with a two-stage masked language model, which explicitly encodes contextual explanations into structured semantic slots, thereby enhancing the model's capacity for deep semantic reasoning in preference judgment. This design significantly improves both the stability of reward signals and OOD generalization. Empirical results demonstrate that our method consistently outperforms state-of-the-art generative and discriminative reward models across RLHF benchmarks and diverse OOD settings. By grounding reward estimation in explainable, structured semantics, our approach establishes a novel paradigm for trustworthy and robust alignment learning.
📝 Abstract
The emergence of LM-based judging reward modeling, represented by generative reward models, has made reinforcement learning from AI feedback (RLAIF) efficient and scalable. To further advance this paradigm, we propose a core insight: this form of reward modeling shares fundamental formal consistency with natural language inference (NLI), a core task in natural language understanding. This reframed perspective points to a key path for building superior reward models: scaling the model's comprehension boundaries. Pursuing this path, exploratory experiments on NLI tasks demonstrate that slot-prediction masked language models (MLMs) incorporating contextual explanations significantly outperform mainstream autoregressive models. Based on this key finding, we propose ESFP-RM, a two-stage LM-based judging reward model that utilizes an explanation-based slot framework for prediction to fully leverage the advantages of MLMs. Extensive experiments demonstrate that in both reinforcement learning from human feedback (RLHF) and out-of-distribution (OOD) scenarios, the ESFP-RM framework delivers more stable and generalizable reward signals than generative reward models.
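To make the two-stage idea concrete, here is a minimal, self-contained sketch of the pipeline shape the abstract describes: stage 1 produces a contextual explanation, stage 2 fills a masked NLI-style slot (e.g. entailment / neutral / contradiction) over that explanation, and the slot distribution is mapped to a scalar reward. Both stages are stubbed with trivial heuristics; all function names, the template, and the label-to-reward mapping are hypothetical illustrations, not the paper's actual ESFP-RM implementation (which uses trained MLMs).

```python
# Hypothetical sketch of a two-stage "explanation + slot prediction" reward
# scorer in the spirit of ESFP-RM. The stage-1 generator and stage-2 MLM are
# replaced by keyword stubs purely to show the data flow.

SLOT_LABELS = ("entailment", "neutral", "contradiction")

def stage1_explain(prompt: str, response: str) -> str:
    """Stage 1 (stub): produce a contextual explanation for the pair.
    A real system would generate this with a language model."""
    return f"For the prompt '{prompt}', the response claims: {response}"

def stage2_slot_probs(explanation: str, response: str) -> dict:
    """Stage 2 (stub): an MLM would fill the masked slot in a template like
    'Given the explanation, the response is [MASK] with the prompt intent.'
    Here the label distribution is faked with a simple negation heuristic."""
    text = (explanation + " " + response).lower()
    if " not " in text or "never" in text:
        return {"entailment": 0.1, "neutral": 0.2, "contradiction": 0.7}
    return {"entailment": 0.7, "neutral": 0.2, "contradiction": 0.1}

def reward(prompt: str, response: str) -> float:
    """Map the slot distribution to a scalar reward in [-1, 1]."""
    probs = stage2_slot_probs(stage1_explain(prompt, response), response)
    return probs["entailment"] - probs["contradiction"]

print(reward("Is the sky blue?", "Yes, the sky is blue."))       # positive
print(reward("Is the sky blue?", "No, the sky is never blue."))  # negative
```

The structured slot output is what makes the reward interpretable: the same distribution that yields the scalar signal also names which NLI relation the judge assigned.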