The Power of Active Multi-Task Learning in Reinforcement Learning from Human Feedback

📅 2024-05-18
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Human annotation in RLHF is prohibitively expensive, and multi-task learning suffers from low sample efficiency. Method: a task-correlation-driven active multi-task sampling framework for RLHF, modeled as a contextual dueling bandit problem with shared linear representations; annotation budgets are dynamically allocated across source tasks according to estimated task correlations. Contribution/Results: the first formulation to jointly optimize task-correlation estimation and active sampling, with theoretical guarantees: to reach an ε-optimal policy, total source-task sample complexity is significantly reduced, while target-task sample complexity is O(d), linear in the latent-space dimension d, improving on the O(d²) of uniform sampling. The approach substantially improves data efficiency and provides a scalable theoretical and practical pathway for resource-constrained RLHF.
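The budget-allocation idea above can be sketched in a few lines: give each source task a share of the annotation budget proportional to its estimated relevance, with a small floor so no task is starved. This is an illustrative toy, not the paper's exact allocation rule; the function name, the floor heuristic, and the proportional weighting are all assumptions.

```python
import numpy as np

def allocate_budget(task_relevance, total_budget, floor_frac=0.05):
    """Split an annotation budget across source tasks in proportion to
    estimated task relevance, keeping a small per-task floor.
    Illustrative sketch only; the paper's allocation rule may differ."""
    relevance = np.asarray(task_relevance, dtype=float)
    n = len(relevance)
    floor = floor_frac * total_budget / n          # minimum per task
    weights = relevance / relevance.sum()          # normalized relevance
    raw = floor + (total_budget - floor * n) * weights
    budgets = np.floor(raw).astype(int)
    budgets[np.argmax(weights)] += total_budget - budgets.sum()  # absorb rounding
    return budgets

# Three source tasks with high / medium / low estimated relevance
print(allocate_budget([0.8, 0.5, 0.1], total_budget=1000))
```

The point of the floor is exploration: even a task that currently looks irrelevant keeps a few samples, so its relevance estimate can be revised.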

📝 Abstract
Reinforcement learning from human feedback (RLHF) has contributed to performance improvements in large language models. To tackle its reliance on substantial amounts of human-labeled data, a successful approach is multi-task representation learning, which involves learning a high-quality, low-dimensional representation from a wide range of source tasks. In this paper, we formulate RLHF as the contextual dueling bandit problem and assume a common linear representation. We demonstrate that the sample complexity of source tasks in multi-task RLHF can be reduced by considering task relevance and allocating different sample sizes to source tasks with varying task relevance. We further propose an algorithm that estimates task relevance from a small amount of additional data and then learns a policy. We prove that to achieve an $\varepsilon$-optimal policy, the sample complexity of the source tasks can be significantly reduced compared to uniform sampling. Additionally, the sample complexity of the target task is only linear in the dimension of the latent space, thanks to representation learning.
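The model in the abstract (a contextual dueling bandit with a common linear representation) can be sketched as a Bradley-Terry-style preference probability: raw d-dimensional features are projected through a shared matrix B into a k-dimensional latent space, where a per-task parameter scores the two candidate actions. This is a minimal sketch under standard linear dueling-bandit assumptions; the variable names and the logistic link are illustrative, and the paper's exact model may differ in details.

```python
import numpy as np

def preference_prob(theta, B, feat_a, feat_b):
    """P(action a preferred over action b) under a linear dueling-bandit
    model with a shared representation B (d x k): utilities are computed
    in the k-dim latent space and compared through a logistic link.
    Sketch under Bradley-Terry assumptions, not the paper's exact model."""
    util_a = theta @ (B.T @ feat_a)   # latent utility of action a
    util_b = theta @ (B.T @ feat_b)   # latent utility of action b
    return 1.0 / (1.0 + np.exp(-(util_a - util_b)))

rng = np.random.default_rng(0)
d, k = 16, 3                      # ambient vs latent dimension
B = rng.standard_normal((d, k))   # shared representation (learned from source tasks)
theta = rng.standard_normal(k)    # target-task parameter: only k-dimensional
p = preference_prob(theta, B, rng.standard_normal(d), rng.standard_normal(d))
print(round(p, 3))
```

The target-task benefit claimed in the abstract shows up here: once B is learned from source tasks, the target task only has to estimate the k-dimensional theta, which is why its sample complexity scales with the latent dimension rather than the ambient one.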
Problem

Research questions and friction points this paper is trying to address.

RLHF's reliance on large amounts of expensive human-labeled data
Low sample efficiency of uniform sampling across source tasks in multi-task learning
How to estimate task relevance cheaply and exploit it for policy learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-task representation learning reduces data reliance.
Algorithm estimates task relevance with minimal data.
Sample complexity reduced via task relevance allocation.
Ruitao Chen
Peking University
Liwei Wang
Peking University