Active teacher selection for reinforcement learning from human feedback

📅 2023-10-23
🏛️ arXiv.org
📈 Citations: 5
Influential: 0
🤖 AI Summary
Existing RLHF methods typically assume feedback comes from a single teacher, neglecting differences in rationality, expertise, and annotation cost across the multiple teachers actually queried, which limits reward model accuracy. This paper introduces the Hidden Utility Bandit (HUB) framework, which formalizes learning from multiple teachers by jointly modeling their rationality, expertise, and cost. Building on HUB, the authors propose Active Teacher Selection (ATS), an algorithm that actively decides when to query a teacher and which teacher to query. Evaluated on two real-world domains (paper recommendation and COVID-19 vaccine testing), ATS outperforms baseline selection algorithms, demonstrating that exploiting differences between teachers yields more accurate reward models.
📝 Abstract
Reinforcement learning from human feedback (RLHF) enables machine learning systems to learn objectives from human feedback. A core limitation of these systems is their assumption that all feedback comes from a single human teacher, despite querying a range of distinct teachers. We propose the Hidden Utility Bandit (HUB) framework to model differences in teacher rationality, expertise, and costliness, formalizing the problem of learning from multiple teachers. We develop a variety of solution algorithms and apply them to two real-world domains: paper recommendation systems and COVID-19 vaccine testing. We find that the Active Teacher Selection (ATS) algorithm outperforms baseline algorithms by actively selecting when and which teacher to query. The HUB framework and ATS algorithm demonstrate the importance of leveraging differences between teachers to learn accurate reward models, facilitating future research on active teacher selection for robust reward modeling.
Problem

Research questions and friction points this paper is trying to address.

Modeling differences in teacher rationality, expertise, and costliness for RLHF
Learning accurate reward models by actively selecting which teacher to query
Improving reward modeling in domains like recommendation systems and vaccine testing
Innovation

Methods, ideas, or system contributions that make the work stand out.

HUB framework models teacher differences
ATS algorithm selects teachers actively
Applied to recommendation and vaccine testing
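To make the teacher-selection idea concrete, here is a minimal, hypothetical sketch (not the paper's actual HUB/ATS algorithm): each teacher answers pairwise preference queries with Boltzmann rationality at a per-query cost, and the learner estimates each teacher's informativeness per unit cost before committing its query budget. All names, parameters, and the scoring rule are illustrative assumptions.

```python
import math
import random

random.seed(0)

# Hidden utilities of two items the teachers compare (unknown to the learner).
TRUE_UTILITY = {"A": 1.0, "B": 0.0}

def query_teacher(beta):
    """A teacher with Boltzmann rationality beta prefers the higher-utility
    item with probability exp(beta*u_A) / (exp(beta*u_A) + exp(beta*u_B))."""
    p_a = math.exp(beta * TRUE_UTILITY["A"]) / (
        math.exp(beta * TRUE_UTILITY["A"]) + math.exp(beta * TRUE_UTILITY["B"])
    )
    return "A" if random.random() < p_a else "B"

# Two hypothetical teachers: a rational but expensive expert and a
# noisy but cheap novice.
teachers = {
    "expert": {"beta": 3.0, "cost": 2.0},
    "novice": {"beta": 0.1, "cost": 1.0},
}

def informativeness(beta, n=200):
    """Estimate how far a teacher's answers are from chance (0 = pure noise,
    0.5 = perfectly consistent) by sampling n pairwise queries."""
    votes_a = sum(query_teacher(beta) == "A" for _ in range(n))
    return abs(votes_a / n - 0.5)

# Naive active selection: score each teacher by estimated informativeness
# per unit cost, then direct future queries to the best-scoring teacher.
scores = {
    name: informativeness(t["beta"]) / t["cost"]
    for name, t in teachers.items()
}
best = max(scores, key=scores.get)
print(best)  # here the expert's consistency outweighs its higher cost
```

In this toy setting the expert wins because its answers carry far more signal per dollar than the novice's near-random ones; the paper's actual ATS algorithm makes this trade-off within a principled bandit formulation rather than with a one-shot heuristic like the ratio above.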