Process Reinforcement through Implicit Rewards

📅 2025-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper targets three bottlenecks in the reinforcement learning of LLMs: the difficulty of training process reward models (PRMs) online, the high cost of process-level human annotation, and vulnerability to reward hacking. It proposes PRIME, an implicit process reward modeling framework that requires no explicit process-level annotations. PRIME forgoes the dedicated PRM training phase that existing approaches require, instead updating the PRM online using only policy rollouts and final outcome labels, with dense token-level rewards derived implicitly from the log-likelihood ratio between the PRM and a reference model. The method combines with various advantage functions and integrates seamlessly with standard RLHF pipelines. Starting from Qwen2.5-Math-7B-Base, PRIME achieves a 15.1% average improvement over the SFT model on key mathematical and code reasoning benchmarks, and the resulting model, Eurus-2-7B-PRIME, surpasses Qwen2.5-Math-7B-Instruct on seven reasoning benchmarks while using only 10% of its training data.
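For concreteness, here is a minimal sketch of how such implicit per-token process rewards can be computed, assuming the log-likelihood-ratio parameterization the paper builds on; the function name, tensor shapes, and the beta default are illustrative, not taken from the paper's code.

```python
import torch

def implicit_process_rewards(prm_logprobs: torch.Tensor,
                             ref_logprobs: torch.Tensor,
                             beta: float = 0.05) -> torch.Tensor:
    """Per-token implicit process reward:
        r_t = beta * [log pi_phi(y_t | x, y_<t) - log pi_ref(y_t | x, y_<t)]
    prm_logprobs, ref_logprobs: [batch, seq_len] log-probs of the sampled
    response tokens under the online-updated implicit PRM (pi_phi) and a
    frozen reference model (pi_ref). No process-level labels are needed:
    the dense rewards fall out of the log-likelihood ratio.
    """
    return beta * (prm_logprobs - ref_logprobs)
```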

📝 Abstract
Dense process rewards have proven a more effective alternative to the sparse outcome-level rewards in the inference-time scaling of large language models (LLMs), particularly in tasks requiring complex multi-step reasoning. While dense rewards also offer an appealing choice for the reinforcement learning (RL) of LLMs since their fine-grained rewards have the potential to address some inherent issues of outcome rewards, such as training efficiency and credit assignment, this potential remains largely unrealized. This can be primarily attributed to the challenges of training process reward models (PRMs) online, where collecting high-quality process labels is prohibitively expensive, making them particularly vulnerable to reward hacking. To address these challenges, we propose PRIME (Process Reinforcement through IMplicit rEwards), which enables online PRM updates using only policy rollouts and outcome labels through implicit process rewards. PRIME combines well with various advantage functions and forgoes the dedicated reward model training phase that existing approaches require, substantially reducing the development overhead. We demonstrate PRIME's effectiveness on competition math and coding. Starting from Qwen2.5-Math-7B-Base, PRIME achieves a 15.1% average improvement across several key reasoning benchmarks over the SFT model. Notably, our resulting model, Eurus-2-7B-PRIME, surpasses Qwen2.5-Math-7B-Instruct on seven reasoning benchmarks with 10% of its training data.
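In symbols, the implicit process reward for token y_t of a response to prompt x takes the form below (a sketch of the parameterization the abstract alludes to, where pi_phi is the online-updated implicit PRM, pi_ref a frozen reference model, and beta a scaling coefficient):

```latex
r_t \;=\; \beta \,\log
  \frac{\pi_\phi\left(y_t \mid \mathbf{x},\, \mathbf{y}_{<t}\right)}
       {\pi_{\mathrm{ref}}\left(y_t \mid \mathbf{x},\, \mathbf{y}_{<t}\right)}
```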
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Complex Thinking Tasks
Mathematical and Programming Problem Solving
Innovation

Methods, ideas, or system contributions that make the work stand out.

The PRIME method: online reinforcement learning with dense implicit process rewards
Simplified online PRM update from policy rollouts and outcome labels, with no dedicated reward-model training phase (sketched below)
Stronger performance with less data: surpasses Qwen2.5-Math-7B-Instruct using 10% of its training data
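The "simplified online PRM update" above amounts to fitting the implicit PRM with a plain cross-entropy objective on final-answer correctness. A minimal sketch, assuming the sequence-level implicit reward is the masked sum of per-token log-ratios; function and argument names and the beta default are illustrative:

```python
import torch
import torch.nn.functional as F

def prm_update_loss(prm_logprobs: torch.Tensor,
                    ref_logprobs: torch.Tensor,
                    outcome_labels: torch.Tensor,
                    response_mask: torch.Tensor,
                    beta: float = 0.05) -> torch.Tensor:
    """Online PRM update from outcome labels only (no process labels).
    prm_logprobs, ref_logprobs, response_mask: [batch, seq_len];
    outcome_labels: [batch] binary correctness of each rollout.
    The sequence-level implicit reward is the masked sum of per-token
    log-ratios, trained as a logit against the outcome label.
    """
    seq_reward = beta * ((prm_logprobs - ref_logprobs) * response_mask).sum(-1)
    return F.binary_cross_entropy_with_logits(seq_reward, outcome_labels.float())
```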
Ganqu Cui
Shanghai AI Lab
LLM Alignment, Reinforcement Learning
Lifan Yuan
University of Illinois Urbana-Champaign
Natural Language Processing, Machine Learning
Zefan Wang
Tsinghua University
Machine Learning
Hanbin Wang
Peking University
Natural Language Processing, Code Intelligence, Information Retrieval
Wendi Li
PhD at UW-Madison
Bingxiang He
Second year PhD Candidate, Tsinghua University
Natural Language Processing
Yuchen Fan
Shanghai AI Laboratory & Shanghai Jiao Tong University
NLP, Large Language Models, Evaluation
Tianyu Yu
Tsinghua University
Multi-Modal Learning
Qixin Xu
Undergraduate of Computer Science, Tsinghua University
Multi-Modal Learning, Reinforcement Learning
Weize Chen
Tsinghua University
NLP, ML
Jiarui Yuan
Tsinghua University
Huayu Chen
Tsinghua University
Reinforcement Learning, Deep Generative Models, Machine Learning
Kaiyan Zhang
Tsinghua University
Foundation Model, Collective Intelligence, Scientific Intelligence
Xingtai Lv
Tsinghua University
Large Language Model, Natural Language Processing
Shuo Wang
Tsinghua University
Yuan Yao
Tsinghua University
Xu Han
Tsinghua University
Hao Peng
University of Illinois Urbana-Champaign
Yu Cheng
Shanghai AI Lab
Zhiyuan Liu
Tsinghua University
Maosong Sun
Professor of Computer Science and Technology, Tsinghua University
Natural Language Processing, Artificial Intelligence, Social Computing
Bowen Zhou
Tsinghua University, Shanghai AI Lab
Ning Ding
Tsinghua University