🤖 AI Summary
Existing reward model distillation methods treat generative large language models (LLMs) merely as binary annotators, failing to leverage their multidimensional capabilities and thereby limiting distillation performance. This work proposes RM-Distiller, a novel framework that systematically explores distilling reward models from generative LLMs by integrating their refinement, scoring, and generation capacities. Specifically, RM-Distiller constructs fine-grained contrastive signals, introduces a margin-aware optimization objective, and employs generative distribution regularization to preserve linguistic knowledge. Evaluated on standard reward modeling benchmarks and reinforcement learning alignment tasks, the proposed method significantly outperforms existing distillation approaches, demonstrating the effectiveness and superiority of synergistically harnessing the teacher LLM’s multifaceted abilities.
📝 Abstract
Reward models (RMs) play a pivotal role in aligning large language models (LLMs) with human preferences. Because high-quality human preference annotations are difficult to obtain, distilling preferences from generative LLMs has emerged as a standard practice. However, existing approaches predominantly treat teacher models as simple binary annotators, failing to fully exploit their rich knowledge and capabilities for RM distillation. To address this, we propose RM-Distiller, a framework designed to systematically exploit the multifaceted capabilities of teacher LLMs: (1) Refinement capability, which synthesizes highly correlated response pairs to create fine-grained contrastive signals. (2) Scoring capability, which guides the RM in capturing precise preference strength via a margin-aware optimization objective. (3) Generation capability, which incorporates the teacher's generative distribution to regularize the RM, preserving its fundamental linguistic knowledge. Extensive experiments demonstrate that RM-Distiller significantly outperforms traditional distillation methods both on RM benchmarks and in reinforcement learning-based alignment, showing that exploiting multifaceted teacher capabilities is critical for effective reward modeling. To the best of our knowledge, this is the first systematic study of RM distillation from generative LLMs.
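To make the margin-aware objective concrete, here is a minimal sketch of a Bradley-Terry-style pairwise loss in which the required reward gap is set by the teacher's score difference rather than a fixed constant. The function names and the exact loss form are illustrative assumptions, not the paper's actual formulation.

```python
import math

def _sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def margin_aware_loss(r_chosen, r_rejected, teacher_margin):
    """Pairwise preference loss with a teacher-derived margin.

    Instead of only requiring r_chosen > r_rejected (a binary signal),
    the student RM's reward gap is pushed to exceed `teacher_margin`,
    e.g. the gap between the teacher LLM's scores for the two responses.
    This way the RM learns preference *strength*, not just order.
    """
    return -math.log(_sigmoid(r_chosen - r_rejected - teacher_margin))

# With a zero margin this reduces to the standard Bradley-Terry loss;
# a larger teacher margin demands a larger reward gap from the student.
```

In practice the margins would come from the teacher's scalar scores over the synthesized response pairs, and this term would be combined with the generative-distribution regularizer mentioned in the abstract.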