🤖 AI Summary
This work addresses the low data efficiency of reinforcement learning from human feedback (RLHF) by proposing an online learning algorithm that incrementally updates both the reward model and the language model as choice data arrives. The approach trains the language model with a REINFORCE variant, models reward uncertainty with an epistemic neural network, and combines information-directed exploration with a small affirmative nudge added to each reinforcement signal. Evaluated on Gemma large language models, the method matches conventional offline RLHF trained on 200,000 human labels while using fewer than 20,000 labels, a more than tenfold gain in data efficiency. Extrapolating from these results, the authors expect training on one million labels to match offline RLHF trained on one billion labels, a thousandfold gain and, to their knowledge, the first demonstration that improvements of this magnitude are possible.
📝 Abstract
We develop an online learning algorithm that dramatically improves the data efficiency of reinforcement learning from human feedback (RLHF). Our algorithm incrementally updates reward and language models as choice data is received. The reward model is fit to the choice data, while the language model is updated by a variant of REINFORCE, with reinforcement signals provided by the reward model. Several features enable the efficiency gains: a small affirmative nudge added to each reinforcement signal, an epistemic neural network that models reward uncertainty, and information-directed exploration. With Gemma large language models (LLMs), our algorithm matches the performance of offline RLHF trained on 200K labels using fewer than 20K labels, representing more than a 10x gain in data efficiency. Extrapolating from our results, we expect our algorithm trained on 1M labels to match offline RLHF trained on 1B labels. This represents a 1,000x gain. To our knowledge, these are the first results to demonstrate that such large improvements are possible.
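The core loop described above — fitting a reward model to incoming choice data while updating the policy with a nudged REINFORCE signal — can be illustrated with a toy sketch. This is not the paper's implementation: the discrete action space, the simulated Bradley-Terry annotator, the learning rates, and the uniform exploration mix (a crude stand-in for the paper's information-directed exploration with an epistemic network) are all assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 8                            # toy action space standing in for LLM responses
true_r = rng.normal(size=K)      # hidden "human" utility (simulation assumption)
r_hat = np.zeros(K)              # incrementally learned reward model
logits = np.zeros(K)             # policy parameters
NUDGE, LR_R, LR_P = 0.1, 0.5, 0.2  # illustrative hyperparameters

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for t in range(2000):
    p = softmax(logits)
    # sample a response pair; mixing in some uniform mass is a crude proxy
    # for the paper's information-directed exploration
    p_mix = 0.9 * p + 0.1 / K
    a, b = rng.choice(K, size=2, replace=False, p=p_mix)
    # simulated annotator: Bradley-Terry choice on the hidden true reward
    pref_a = rng.random() < 1.0 / (1.0 + np.exp(true_r[b] - true_r[a]))
    win, lose = (a, b) if pref_a else (b, a)
    # incremental reward-model fit: one SGD step on the Bradley-Terry loss
    g = 1.0 / (1.0 + np.exp(r_hat[win] - r_hat[lose]))
    r_hat[win] += LR_R * g
    r_hat[lose] -= LR_R * g
    # REINFORCE step on each sampled response, with the small affirmative
    # nudge added to the reward-model signal
    for x in (a, b):
        grad = -softmax(logits)          # grad of log pi(x) w.r.t. logits ...
        grad[x] += 1.0                   # ... is e_x - pi
        logits += LR_P * (r_hat[x] + NUDGE) * grad

best = int(np.argmax(true_r))
print("policy mass on best response:", softmax(logits)[best])
```

The key structural point mirrors the abstract: human labels touch only the reward model (the Bradley-Terry step), and the policy never sees labels directly — it learns from the reward model's signal, nudged slightly upward on every sampled response.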