PaTaRM: Bridging Pairwise and Pointwise Signals via Preference-Aware Task-Adaptive Reward Modeling

📅 2025-10-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing generative reward models (GRMs) face a fundamental trade-off: pairwise methods rely on binary preferences, causing misalignment with pointwise inference, while pointwise methods require costly absolute annotations and generalize poorly. This paper proposes PaTaRM, the first framework unifying pairwise and pointwise modeling: a preference-aware mechanism transforms pairwise comparisons into robust pointwise training signals, and task-adaptive dynamic rubric generation eliminates the need for explicit absolute labels. Its core innovations are joint relative preference learning, dynamic criterion adaptation, and fine-grained consistency. Evaluated on Qwen3-8B/14B, PaTaRM achieves an average relative improvement of 4.7% on RewardBench and RMBench, and 13.6% on downstream RLHF tasks (IFEval + InFoBench), while improving alignment efficiency, cross-task generalization, and annotation cost-effectiveness.

📝 Abstract
Reward models (RMs) are central to reinforcement learning from human feedback (RLHF), providing the critical supervision signals that align large language models (LLMs) with human preferences. While generative reward models (GRMs) offer greater interpretability than traditional scalar RMs, current training paradigms remain limited. Pair-wise methods rely on binary good-versus-bad labels, which cause mismatches for point-wise inference and necessitate complex pairing strategies for effective application in RLHF. On the other hand, point-wise methods require more elaborate absolute labeling with rubric-driven criteria, resulting in poor adaptability and high annotation costs. In this work, we propose the Preference-Aware Task-Adaptive Reward Model (PaTaRM), a unified framework that integrates a preference-aware reward (PAR) mechanism with dynamic rubric adaptation. PaTaRM leverages relative preference information from pairwise data to construct robust point-wise training signals, eliminating the need for explicit point-wise labels. Simultaneously, it employs a task-adaptive rubric system that flexibly generates evaluation criteria for both global task consistency and instance-specific fine-grained reasoning. This design enables efficient, generalizable, and interpretable reward modeling for RLHF. Extensive experiments show that PaTaRM achieves an average relative improvement of 4.7% on RewardBench and RMBench across Qwen3-8B and Qwen3-14B models. Furthermore, PaTaRM boosts downstream RLHF performance, with an average improvement of 13.6% across IFEval and InFoBench benchmarks, confirming its effectiveness and robustness. Our code is available at https://github.com/JaneEyre0530/PaTaRM.
Problem

Research questions and friction points this paper is trying to address.

Unifying pairwise and pointwise reward modeling approaches
Reducing annotation costs while maintaining reward model adaptability
Improving interpretability and generalization in human feedback alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unifies pairwise and pointwise training via preference-aware reward mechanism
Employs dynamic rubric adaptation for task-specific evaluation criteria
Generates point-wise training signals without explicit absolute labels
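The bullets above can be made concrete with a minimal sketch of how pairwise preference data might yield pointwise training signals. This is an illustrative assumption in the spirit of the paper's preference-aware reward (PAR) mechanism, not its actual implementation: the names (`PairwiseExample`, `pointwise_targets`, `score_fn`, `margin`) and the hinge-style loss are all hypothetical.

```python
# Hypothetical sketch: deriving pointwise training signals from pairwise
# preference data. The hinge-style margin loss below is an illustrative
# choice, not the loss used by PaTaRM.
from dataclasses import dataclass


@dataclass
class PairwiseExample:
    prompt: str
    chosen: str    # preferred response
    rejected: str  # dispreferred response


def pointwise_targets(pair, score_fn, margin=1.0):
    """Turn one pairwise comparison into pointwise supervision: the
    chosen response must out-score the rejected one by at least
    `margin` under the model's own pointwise scores."""
    s_chosen = score_fn(pair.prompt, pair.chosen)
    s_rejected = score_fn(pair.prompt, pair.rejected)
    # Any pairwise-consistent objective (e.g. Bradley-Terry) would also
    # work here; a hinge loss keeps the sketch self-contained.
    loss = max(0.0, margin - (s_chosen - s_rejected))
    return s_chosen, s_rejected, loss


# Toy scorer standing in for a generative RM that rates a response
# against task-adaptive rubrics (placeholder only: scores by length).
toy_score = lambda prompt, resp: float(len(resp))

s_c, s_r, loss = pointwise_targets(
    PairwiseExample("q", "detailed answer", "short"), toy_score)
```

The point of the sketch is the direction of supervision: only relative preferences are required as input, yet the model is trained to emit absolute per-response scores usable at pointwise inference time.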
Ai Jian
Beijing University of Posts and Telecommunications, Beijing, China
Jingqing Ruan
Meituan, Beijing, China
Xing Ma
Meituan, NLP engineer
Dialog System · Large Language Model · Conversation Analysis
Dailin Li
Meituan, Beijing, China
Qianlin Zhou
Meituan, Beijing, China
Ke Zeng
Meituan, Beijing, China
Xunliang Cai
Meituan, Beijing, China