🤖 AI Summary
Reward modeling in RLHF faces critical bottlenecks—including high computational cost, expensive evaluation, and poor reproducibility—largely due to reliance on end-to-end fine-tuning of large language models (LLMs). Method: This paper proposes a lightweight reward modeling paradigm based on precomputed embeddings. It systematically demonstrates that frozen LLM-generated text embeddings can fully replace raw text inputs without performance degradation. The approach integrates embedding reuse, contrastive learning, reward ensembling, and offline caching, and introduces a gradient-free evaluation protocol. Contribution/Results: On mainstream benchmarks, it successfully reproduces state-of-the-art reward ensembling results while reducing training and evaluation costs by over 90%. All computation completes in under one hour on a single CPU machine, matching GPU-based baseline performance. This enables low-cost, cross-laboratory reproducible reward model validation—significantly advancing accessibility and scientific rigor in RLHF research.
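The core recipe the summary describes (a small reward head trained on frozen, precomputed embeddings with a pairwise preference loss) can be sketched in a few lines. This is a minimal illustrative example on synthetic data, not the paper's implementation: the linear head, the Bradley-Terry loss form, and all names (`train_reward_head`, `make_synthetic_pairs`, the hidden "quality direction") are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 32  # toy embedding dimension; real LLM embeddings are much larger

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def make_synthetic_pairs(n, quality_dir):
    """Fake precomputed 'embeddings': of two random responses, the one
    projecting higher on a hidden quality direction is labeled chosen."""
    a = rng.normal(size=(n, DIM))
    b = rng.normal(size=(n, DIM))
    a_better = (a @ quality_dir) >= (b @ quality_dir)
    chosen = np.where(a_better[:, None], a, b)
    rejected = np.where(a_better[:, None], b, a)
    return chosen, rejected

def train_reward_head(chosen, rejected, lr=0.1, epochs=200):
    """Fit a linear reward head r(x) = w @ x by gradient descent on the
    Bradley-Terry pairwise loss -log sigmoid(r(chosen) - r(rejected)).
    The LLM stays frozen; only this tiny head is trained, which is why
    the whole procedure runs cheaply on a CPU."""
    w = np.zeros(DIM)
    for _ in range(epochs):
        margin = (chosen - rejected) @ w
        grad = -((1.0 - sigmoid(margin))[:, None] * (chosen - rejected)).mean(axis=0)
        w -= lr * grad
    return w

quality_dir = rng.normal(size=DIM)  # hidden ground truth (synthetic only)
chosen, rejected = make_synthetic_pairs(512, quality_dir)
w = train_reward_head(chosen, rejected)
accuracy = float(np.mean(chosen @ w > rejected @ w))
```

Because the embeddings are computed once and cached offline, each training run reduces to cheap linear-algebra passes like the loop above, which is consistent with the summary's claim of sub-hour, CPU-only experiments.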
📝 Abstract
Large Language Models (LLMs) have made substantial strides in structured tasks through Reinforcement Learning (RL), demonstrating proficiency in mathematical reasoning and code generation. However, applying RL in broader domains such as chatbots and content generation -- through the process known as Reinforcement Learning from Human Feedback (RLHF) -- presents unique challenges. Reward models are critical to RLHF, acting as proxies that evaluate how well LLM outputs align with human intent. Despite these advancements, reward model development is hindered by computationally heavy training, costly evaluation, and consequently poor reproducibility. We advocate for embedding-based inputs in reward model research as an accelerated solution to these challenges. By leveraging embeddings for reward modeling, we can enhance reproducibility, reduce hardware demands, improve training stability, and significantly cut training and evaluation costs, thereby facilitating fair and efficient comparisons in this active research area. We then present a case study reproducing existing reward model ensemble research with embedding-based reward models. Finally, we discuss future avenues for research, aiming to contribute to safer and more effective LLM deployments.
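The case study mentioned above concerns reward model ensembles, which score an output with several independently trained reward heads and then aggregate. A hedged sketch of common aggregation rules from the ensembling literature follows; the function name `ensemble_reward` and the specific rule set (`mean`, `min`, and a mean-minus-std lower bound) are illustrative assumptions, not necessarily the exact rules used in the paper.

```python
import numpy as np

def ensemble_reward(scores, method="lcb", beta=1.0):
    """Aggregate scores from K reward heads, shape (K, N) -> (N,).

    'mean' averages the members, 'min' is the most pessimistic member,
    and 'lcb' (mean minus beta * std) penalizes member disagreement,
    a conservative choice aimed at mitigating reward overoptimization.
    """
    scores = np.asarray(scores, dtype=float)
    if method == "mean":
        return scores.mean(axis=0)
    if method == "min":
        return scores.min(axis=0)
    if method == "lcb":
        return scores.mean(axis=0) - beta * scores.std(axis=0)
    raise ValueError(f"unknown method: {method}")
```

For example, with two heads scoring two candidates as `[[1.0, 2.0], [3.0, 2.0]]`, the heads disagree on the first candidate, so its `lcb` score drops below its mean while the second candidate's does not. With embedding-based heads, training a whole ensemble remains cheap because each member is only a small head over the same cached embeddings.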