Can RLHF be More Efficient with Imperfect Reward Models? A Policy Coverage Perspective

📅 2025-02-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses the realistic challenge in online RLHF where reward models exhibit structural mismatch yet semantic relevance. We propose the first transfer learning framework for online RLHF with theoretical regret guarantees. Methodologically, building upon a KL-regularized objective, we establish— for the first time—the intrinsic theoretical connection between policy coverage and suboptimality, enabling an adaptive knowledge transfer mechanism that requires no prior quality assessment of reward models. Our approach integrates online transfer learning, policy coverage analysis, and dynamic reward model selection. Theoretically, our framework achieves early-stage low regret and attains an asymptotically optimal Õ(√T) regret bound independent of model complexity. Empirical evaluation on text summarization demonstrates significant improvements in sample efficiency and convergence speed, alongside reduced computational overhead, consistently outperforming standard online RLHF across all metrics.

Technology Category

Application Category

📝 Abstract
Sample efficiency is critical for online Reinforcement Learning from Human Feedback (RLHF). While existing works investigate sample-efficient online exploration strategies, the potential of utilizing misspecified yet relevant reward models to accelerate learning remains underexplored. This paper studies how to transfer knowledge from those imperfect reward models in online RLHF. We start by identifying a novel property of the KL-regularized RLHF objective: emph{a policy's ability to cover the optimal policy is captured by its sub-optimality}. Building on this insight, we propose a theoretical transfer learning algorithm with provable benefits compared to standard online learning. Our approach achieves low regret in the early stage by quickly adapting to the best available source reward models without prior knowledge of their quality, and over time, it attains an $ ilde{O}(sqrt{T})$ regret bound emph{independent} of structural complexity measures. Inspired by our theoretical findings, we develop an empirical algorithm with improved computational efficiency, and demonstrate its effectiveness empirically in summarization tasks.
Problem

Research questions and friction points this paper is trying to address.

Exploring efficiency in RLHF with imperfect rewards.
Transferring knowledge from misspecified reward models.
Achieving low regret in early learning stages.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transfer learning algorithm
KL-regularized RLHF objective
Early stage low regret
🔎 Similar Papers
No similar papers found.