Contextual Online Uncertainty-Aware Preference Learning for Human Feedback

📅 2025-04-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
In RLHF, online human preference collection under dynamic contexts suffers from coupled decision-making and statistical inference challenges. This paper proposes an uncertainty-aware two-stage online learning framework: the first stage employs an ε-greedy policy to explore preference structures, while the second stage exploits the learned knowledge for efficient decision-making. Theoretically, the paper establishes, for the first time, uniform estimation rates and asymptotic normality guarantees for dependent preference data, integrating matrix martingale concentration inequalities, anti-concentration bounds, and dynamic context modeling techniques. Empirically, the method significantly outperforms state-of-the-art baselines in simulations. On the MMLU medical anatomy subset, it precisely captures fine-grained preference differences among large language models, demonstrating both statistical reliability and decision-making efficacy.
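
To make the two-stage design concrete, here is a minimal Python sketch of the idea described above, assuming a contextual Bradley-Terry preference model. The paper's exact model, estimator, and schedule are not given here, so every name and hyperparameter in the sketch (K, d, eps, n_explore, fit_bt, simulate_feedback) is hypothetical, not from the paper.

```python
# Minimal sketch of the two-stage design, assuming a contextual
# Bradley-Terry model: P(i beats j | context x) = sigmoid(x @ (theta_i - theta_j)).
# All names and hyperparameters here are hypothetical, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
K, d = 4, 5                              # number of candidate models, context dim
theta_true = rng.normal(size=(K, d))     # unknown per-model preference parameters

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def simulate_feedback(i, j, x):
    """Binary preference: 1.0 if model i is preferred to model j in context x."""
    return float(rng.random() < sigmoid(x @ (theta_true[i] - theta_true[j])))

def fit_bt(history, n_iter=200, lr=0.5):
    """Gradient ascent on the Bradley-Terry log-likelihood (plain sketch)."""
    theta = np.zeros((K, d))
    for _ in range(n_iter):
        grad = np.zeros_like(theta)
        for i, j, x, y in history:
            p = sigmoid(x @ (theta[i] - theta[j]))
            grad[i] += (y - p) * x       # d/d theta_i of the log-likelihood
            grad[j] -= (y - p) * x       # d/d theta_j of the log-likelihood
        theta += lr * grad / len(history)
    return theta

eps, n_explore, n_exploit = 0.2, 300, 300
history, theta_hat = [], np.zeros((K, d))

# Stage 1: eps-greedy exploration of the preference structure.
for t in range(n_explore):
    x = rng.normal(size=d)                         # dynamic context
    if rng.random() < eps:                         # explore: random pair
        i, j = rng.choice(K, size=2, replace=False)
    else:                                          # exploit current estimate
        i, j = np.argsort(theta_hat @ x)[-2:][::-1]
    history.append((i, j, x, simulate_feedback(i, j, x)))
    if (t + 1) % 50 == 0:                          # periodic refit
        theta_hat = fit_bt(history)

# Stage 2: pure exploitation of the learned estimate.
theta_hat = fit_bt(history)
for t in range(n_exploit):
    x = rng.normal(size=d)
    best = int(np.argmax(theta_hat @ x))           # recommend estimated-best model
```

The periodic refit and plain gradient ascent are simplifications; per the abstract, the paper's analysis covers the estimators' uniform rate and asymptotic normality using the dependent samples collected in both stages.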

📝 Abstract
Reinforcement Learning from Human Feedback (RLHF) has become a pivotal paradigm in artificial intelligence for aligning large models with human preferences. In this paper, we propose a novel statistical framework to simultaneously conduct online decision-making and statistical inference on the optimal model using human preference data based on dynamic contextual information. Our approach introduces an efficient decision strategy that achieves the optimal regret bound while yielding the asymptotic distribution of the estimators. A key challenge in RLHF is handling dependent online human preference outcomes with dynamic contexts. To address this, on the methodological side, we propose a two-stage algorithm that starts with $\epsilon$-greedy exploration and is followed by exploitation; on the theoretical side, we tailor anti-concentration inequalities and matrix martingale concentration techniques to derive the uniform estimation rate and asymptotic normality of the estimators using dependent samples from both stages. Extensive simulation results demonstrate that our method outperforms state-of-the-art strategies. We apply the proposed framework to analyze human preference data for ranking large language models on the Massive Multitask Language Understanding (MMLU) dataset, yielding insightful results on the performance of different large language models for medical anatomy knowledge.
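
For concreteness, a contextual Bradley-Terry-type link is the kind of preference model this setup suggests; the display below is an illustrative assumption, not necessarily the authors' exact formulation.

```latex
% Illustrative assumption, not necessarily the paper's exact specification:
% given dynamic context x_t, the probability that model i is preferred to
% model j follows a logistic (Bradley-Terry-type) link in the context.
\[
  \mathbb{P}\bigl(i \succ j \mid x_t\bigr)
    = \sigma\!\bigl(x_t^\top(\theta_i^{*} - \theta_j^{*})\bigr),
  \qquad
  \sigma(z) = \frac{1}{1 + e^{-z}}.
\]
```

Under a model of this kind, the abstract's goal reads as estimating the $\theta^{*}$ parameters and identifying the optimal model while controlling regret, using comparisons that are dependent because they are collected adaptively.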
Problem

Research questions and friction points this paper is trying to address.

Online decision-making with dynamic human preference data
Achieving the optimal regret bound alongside valid asymptotic distributions for the estimators
Handling dependent online preference outcomes in RLHF with dynamic contexts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage algorithm: ε-greedy exploration followed by exploitation
Tailored anti-concentration inequalities for uniform estimation rates
Matrix martingale concentration techniques for dependent samples (a standard form of the underlying inequality is sketched after this list)
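
As background for the last point, below is a standard form of the matrix Freedman inequality (Tropp, 2011), the kind of matrix martingale concentration tool the abstract invokes; the paper's tailored statement may differ.

```latex
% A standard form of the matrix Freedman inequality (Tropp, 2011); the
% paper's tailored version may differ. Let \{Y_k\} be a self-adjoint
% d x d matrix martingale with difference sequence \{X_k\} satisfying
% \|X_k\| \le R almost surely, and let
% W_k = \sum_{j \le k} \mathbb{E}[X_j^2 \mid \mathcal{F}_{j-1}]
% denote its predictable quadratic variation. Then
\[
  \mathbb{P}\Bigl(\exists\, k \ge 0 :\
      \lambda_{\max}(Y_k) \ge t \ \text{ and } \ \lVert W_k \rVert \le \sigma^2\Bigr)
  \;\le\; d \cdot \exp\!\biggl(-\frac{t^2/2}{\sigma^2 + Rt/3}\biggr).
\]
```

Bounds of this type control the sample covariance of adaptively collected contexts, which is what makes uniform estimation rates attainable from dependent preference data.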