Inference-Time Scaling for Generalist Reward Modeling

📅 2025-04-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limited inference-time scalability and weak cross-domain generalization of generalist reward models (RMs), this paper proposes Self-Principled Critique Tuning (SPCT), a learning method for inference-time computation-augmented reward modeling. The approach combines pointwise generative reward modeling (GRM), adaptive principle generation with accurate critiques, parallel multi-sample inference, and a meta reward model that guides voting over the sampled rewards. Experiments show that the resulting DeepSeek-GRM models outperform existing methods and models on multiple RM benchmarks without severe biases, and that scaling inference-time compute can outperform comparable training-time (model-size) scaling. The DeepSeek-GRM model series, which the authors state will be open-sourced, empirically validates SPCT's effectiveness and practical utility.

📝 Abstract
Reinforcement learning (RL) has been widely adopted in post-training for large language models (LLMs) at scale. Recently, the incentivization of reasoning capabilities in LLMs from RL indicates that $\textit{proper learning methods could enable effective inference-time scalability}$. A key challenge of RL is to obtain accurate reward signals for LLMs in various domains beyond verifiable questions or artificial rules. In this work, we investigate how to improve reward modeling (RM) with more inference compute for general queries, i.e. the $\textbf{inference-time scalability of generalist RM}$, and further, how to improve the effectiveness of performance-compute scaling with proper learning methods. For the RM approach, we adopt pointwise generative reward modeling (GRM) to enable flexibility for different input types and potential for inference-time scaling. For the learning method, we propose Self-Principled Critique Tuning (SPCT) to foster scalable reward generation behaviors in GRMs through online RL, to generate principles adaptively and critiques accurately, resulting in $\textbf{DeepSeek-GRM}$ models. Furthermore, for effective inference-time scaling, we use parallel sampling to expand compute usage, and introduce a meta RM to guide voting process for better scaling performance. Empirically, we show that SPCT significantly improves the quality and scalability of GRMs, outperforming existing methods and models in various RM benchmarks without severe biases, and could achieve better performance compared to training-time scaling. DeepSeek-GRM still meets challenges in some tasks, which we believe can be addressed by future efforts in generalist reward systems. The models will be released and open-sourced.
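
The pointwise GRM described in the abstract treats reward modeling as text generation: the model writes principles and critiques, and a scalar reward per response is read off from that text afterwards. Below is a minimal sketch of that flow, assuming a generic `grm.generate` text interface, an illustrative prompt, and a "Score: <n>" output convention; none of these are the released DeepSeek-GRM API.

```python
# A minimal sketch of pointwise generative reward modeling (GRM): the reward
# model writes principles and per-response critiques as text, and numeric
# scores are then parsed from that text. Prompt wording, the grm.generate()
# interface, and the "Score: <n>" format are illustrative assumptions.
import re
from typing import List

PRINCIPLE_PROMPT = (
    "Given the query and the candidate responses, first write evaluation "
    "principles adapted to the query, then critique each response and end "
    "each critique with 'Score: <1-10>'."
)

def generate_critique(grm, query: str, responses: List[str]) -> str:
    """One GRM forward pass: adaptive principles plus per-response critiques as free text."""
    numbered = "\n".join(f"[{i + 1}] {r}" for i, r in enumerate(responses))
    return grm.generate(f"{PRINCIPLE_PROMPT}\n\nQuery: {query}\n\nResponses:\n{numbered}")

def extract_pointwise_scores(critique: str, n_responses: int) -> List[float]:
    """Parse one scalar reward per response from the generated critique text."""
    scores = [float(s) for s in re.findall(r"Score:\s*(\d+(?:\.\d+)?)", critique)]
    # Pad defensively in case the model emitted fewer scores than responses.
    return (scores + [0.0] * n_responses)[:n_responses]
```

Because each response receives its own score, the same interface handles single responses, pairs, or larger candidate sets uniformly, which is the flexibility for different input types the abstract highlights.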
Problem

Research questions and friction points this paper is trying to address.

Improving reward modeling for general queries using inference compute
Enhancing performance-compute scaling with effective learning methods
Addressing challenges in accurate reward signals for diverse domains
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pointwise generative reward modeling for flexibility
Self-Principled Critique Tuning for scalable rewards
Parallel sampling and a meta RM for better scaling (see the sketch after this list)
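
The last bullet refers to the inference-time scaling recipe from the abstract: sample several principle-and-critique generations, then let a meta RM guide how the sampled rewards are combined by voting. The sketch below reuses the illustrative helpers from the block after the abstract; the `meta_rm.score` interface and the weighted-voting rule are assumptions for illustration, since the paper only states that the meta RM guides the voting process.

```python
# A minimal sketch of inference-time scaling with parallel sampling plus a meta RM,
# reusing generate_critique / extract_pointwise_scores from the earlier sketch.
# Assumption: meta_rm.score(query, critique) returns a reliability weight for one
# sampled critique; the paper's exact voting rule may differ.
from typing import List

def scaled_rewards(grm, meta_rm, query: str, responses: List[str], k: int = 8) -> List[float]:
    """Aggregate k sampled reward sets into one score per response via weighted voting."""
    n = len(responses)
    totals = [0.0] * n
    weight_sum = 0.0
    for _ in range(k):  # the k samples are independent, so they can run in parallel
        critique = generate_critique(grm, query, responses)
        scores = extract_pointwise_scores(critique, n)
        weight = meta_rm.score(query, critique)
        for i in range(n):
            totals[i] += weight * scores[i]
        weight_sum += weight
    return [total / max(weight_sum, 1e-8) for total in totals]
```

Increasing k trades additional inference compute for more robust rewards, which is the performance-compute scaling axis the paper studies.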
Authors

Zijun Liu — Tsinghua University (LLM Agent, Machine Translation, AIGC)
Peiyi Wang — DeepSeek-AI
Runxin Xu — DeepSeek AI | Peking University (Natural Language Processing)
Shirong Ma — Tsinghua University
Chong Ruan — DeepSeek-AI
Peng Li — Institute for AI Industry Research (AIR), Tsinghua University
Yang Liu — Dept. of Computer Sci. & Tech., Tsinghua University; Institute for AI Industry Research (AIR), Tsinghua University
Yu Wu — University of Cambridge (machine learning, health sensing, mobile health)