Multi-Agent Collaborative Reward Design for Enhancing Reasoning in Reinforcement Learning

📅 2025-11-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional reward models struggle to simultaneously satisfy conflicting multi-dimensional preferences, such as factual accuracy, helpfulness, and safety, while offering little interpretability in their scalar reward scores. To address this, we propose the Collaborative Reward Model (CRM), a multi-agent framework that decomposes holistic preference into domain-specialized evaluators (e.g., fact-checking, safety detection). CRM synthesizes robust, interpretable multi-dimensional reward signals via pairwise ranking, embedding-similarity rewards, and centralized signal aggregation. Integrated seamlessly into standard RLHF pipelines, CRM combines Generalized Advantage Estimation (GAE) with value-function regression for stable policy optimization. Crucially, CRM requires no human annotations beyond those used to train the evaluators, enabling multi-perspective reward shaping directly from model-generated outputs. Evaluated on our curated benchmark rewardBench, CRM significantly improves training stability and downstream reasoning performance, empirically validating its modular architecture and strong generalization across preference dimensions.
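
A minimal, hypothetical sketch of this fusion step is shown below. The evaluator names, fusion weights, agreement bonus, and repetition penalty are assumptions made for illustration based on the summary above, not the paper's actual interface.

```python
# Hypothetical sketch of CRM-style reward fusion (names and terms assumed).
from dataclasses import dataclass
from statistics import pvariance
from typing import Callable, Dict

# A specialist evaluator maps (prompt, response) -> a score, e.g. in [0, 1].
Evaluator = Callable[[str, str], float]


@dataclass
class RewardAggregator:
    evaluators: Dict[str, Evaluator]  # e.g. {"fact": ..., "safety": ..., "rank": ..., "embed": ...}
    weights: Dict[str, float]         # per-dimension fusion weights
    agreement_bonus: float = 0.1      # bonus when specialists agree (assumed term)
    repetition_penalty: float = 0.2   # penalty for repetitive text (assumed term)

    def __call__(self, prompt: str, response: str) -> float:
        scores = {name: ev(prompt, response) for name, ev in self.evaluators.items()}
        # Weighted fusion of the domain-specific signals.
        fused = sum(self.weights[name] * s for name, s in scores.items())
        # Agreement term: low variance across specialists earns a small bonus.
        fused += self.agreement_bonus * (1.0 - pvariance(scores.values()))
        # Repetition term: penalize duplicated tokens in the response.
        tokens = response.split()
        if tokens:
            fused -= self.repetition_penalty * (1.0 - len(set(tokens)) / len(tokens))
        return fused
```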

📝 Abstract
We present CRM (Multi-Agent Collaborative Reward Model), a framework that replaces a single black-box reward model with a coordinated team of specialist evaluators to improve robustness and interpretability in RLHF. Conventional reward models struggle to jointly optimize multiple, sometimes conflicting, preference dimensions (e.g., factuality, helpfulness, safety) and offer limited transparency into why a score is assigned. CRM addresses these issues by decomposing preference evaluation into domain-specific agents that each produce partial signals, alongside global evaluators such as ranker-based and embedding-similarity rewards. A centralized aggregator fuses these signals at each timestep, balancing factors like step-wise correctness, multi-agent agreement, and repetition penalties, yielding a single training reward compatible with standard RL pipelines. The policy is optimized with advantage-based updates (e.g., GAE), while a value model regresses to the aggregated reward, enabling multi-perspective reward shaping without requiring additional human annotations beyond those used to train the evaluators. To support training and assessment, we introduce rewardBench, a benchmark and training suite aligned with the collaborative structure of CRM. Together, CRM and rewardBench provide a practical, modular path to more transparent reward modeling and more stable optimization.
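
To make the optimization step concrete, the sketch below shows how aggregated per-timestep rewards could feed Generalized Advantage Estimation and a value-regression target. The function signature, hyperparameters, and the MSE value loss are assumptions for illustration, not the paper's implementation.

```python
# GAE over aggregated per-timestep rewards, plus the value-regression target.
import torch


def gae_advantages(rewards: torch.Tensor, values: torch.Tensor,
                   gamma: float = 0.99, lam: float = 0.95):
    """rewards: [T] aggregated rewards; values: [T+1] value estimates (bootstrap last)."""
    T = rewards.shape[0]
    advantages = torch.zeros(T)
    gae = torch.tensor(0.0)
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]  # TD residual
        gae = delta + gamma * lam * gae                          # recursive GAE accumulator
        advantages[t] = gae
    returns = advantages + values[:-1]  # target the value model regresses toward
    return advantages, returns


# Example value-regression loss (MSE is an assumption):
# value_loss = torch.nn.functional.mse_loss(predicted_values, returns)
```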
Problem

Research questions and friction points this paper is trying to address.

A single black-box reward model is brittle when preference dimensions conflict
Scalar reward scores give little transparency into why a response is preferred
Factuality, helpfulness, and safety are hard to balance jointly within one model
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent framework replaces single reward model
Specialist evaluators decompose preference into domain signals
Centralized aggregator fuses signals into a single training reward (see the integration sketch after this list)
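
Illustrative only: the fused reward can stand in for a single reward-model call inside an otherwise standard RLHF rollout loop. The loop structure, `policy.generate`, and `aggregator` below are hypothetical placeholders, not interfaces specified by the paper.

```python
# Hypothetical RLHF rollout loop using the fused multi-agent reward.
def collect_rollouts(policy, aggregator, prompts):
    rollouts = []
    for prompt in prompts:
        response = policy.generate(prompt)     # sample from the current policy
        reward = aggregator(prompt, response)  # fused multi-agent reward signal
        rollouts.append((prompt, response, reward))
    # These rewards then drive an advantage-based (e.g., GAE + PPO-style) update,
    # while the value model regresses to the corresponding returns.
    return rollouts
```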
Authors
Pei Yang (Gradient)
Ke Zhang (Waseda University)
Ji Wang (Columbia University)
Xiao Chen (Hong Kong Polytechnic University)
Yuxin Tang (Rice University & Gradient Network)
Eric Yang (AI Scientist, Verily Life Sciences)
Lynn Ai (Gradient)
Bill Shi (Applied Scientist)

Graph AI, Complex Networks, Computational Social Science