🤖 AI Summary
Existing generative reward models (GRMs) face a trade-off: pairwise methods rely on binary preferences, which misalign with pointwise inference, while pointwise methods require costly absolute annotations and generalize poorly. This paper proposes PaTaRM, a framework that unifies pairwise and pointwise modeling: a preference-aware reward mechanism transforms pairwise comparisons into robust pointwise training signals, and task-adaptive dynamic rubric generation eliminates the need for explicit absolute labels. Its core contributions are relative preference learning, dynamic criterion adaptation, and fine-grained evaluation consistency, modeled jointly. On Qwen3-8B and Qwen3-14B, PaTaRM achieves an average relative improvement of 4.7% on RewardBench and RMBench, and of 13.6% on downstream RLHF tasks (IFEval and InFoBench), improving alignment efficiency, cross-task generalization, and annotation cost-effectiveness.
📝 Abstract
Reward models (RMs) are central to reinforcement learning from human feedback (RLHF), providing the critical supervision signals that align large language models (LLMs) with human preferences. While generative reward models (GRMs) offer greater interpretability than traditional scalar RMs, current training paradigms remain limited. Pair-wise methods rely on binary good-versus-bad labels, which cause a mismatch with point-wise inference and necessitate complex pairing strategies for effective application in RLHF. Point-wise methods, on the other hand, require more elaborate absolute labeling with rubric-driven criteria, resulting in poor adaptability and high annotation costs. In this work, we propose the Preference-Aware Task-Adaptive Reward Model (PaTaRM), a unified framework that integrates a preference-aware reward (PAR) mechanism with dynamic rubric adaptation. PaTaRM leverages relative preference information from pairwise data to construct robust point-wise training signals, eliminating the need for explicit point-wise labels. Simultaneously, it employs a task-adaptive rubric system that flexibly generates evaluation criteria for both global task consistency and instance-specific fine-grained reasoning. This design enables efficient, generalizable, and interpretable reward modeling for RLHF. Extensive experiments show that PaTaRM achieves an average relative improvement of 4.7% on RewardBench and RMBench across Qwen3-8B and Qwen3-14B models. Furthermore, PaTaRM boosts downstream RLHF performance, with an average improvement of 13.6% across the IFEval and InFoBench benchmarks, confirming its effectiveness and robustness. Our code is available at https://github.com/JaneEyre0530/PaTaRM.
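To make the core idea concrete, the sketch below illustrates how a pairwise comparison can be turned into a pointwise training signal without absolute labels: a GRM scores each response independently against a rubric, and the training reward is 1 only when the independently assigned scores rank the preferred response higher. This is a minimal toy illustration under stated assumptions, not the paper's implementation; `grm_score` and `preference_reward` are hypothetical names, and the string-matching scorer merely stands in for a real generative reward model.

```python
# Toy sketch (NOT the authors' code): deriving a pointwise training signal
# from pairwise preference data, in the spirit of a preference-aware reward.

def grm_score(response: str, rubric: list[str]) -> float:
    """Stand-in for a generative RM's pointwise judgment: the fraction of
    rubric criteria mentioned in the response, scaled to [0, 10]."""
    hits = sum(1 for criterion in rubric if criterion in response.lower())
    return 10.0 * hits / len(rubric)

def preference_reward(chosen: str, rejected: str, rubric: list[str]) -> float:
    """Pairwise label -> pointwise signal: score each response on its own,
    then reward the model (1.0) only if the independently assigned scores
    agree with the human preference, else give 0.0."""
    return 1.0 if grm_score(chosen, rubric) > grm_score(rejected, rubric) else 0.0

# Example: an instance-specific rubric and one preference pair.
rubric = ["cites sources", "step-by-step", "concise"]
chosen = "A step-by-step answer that cites sources."
rejected = "A vague answer."
print(preference_reward(chosen, rejected, rubric))  # → 1.0
```

Because each response is scored on its own, the same model can be queried pointwise at inference time, which is the consistency the pairwise-only training paradigm lacks.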