Uncertainty Quantification for Large Language Model Reward Learning under Heterogeneous Human Feedback

📅 2025-12-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the reliability of reward modeling for large language models (LLMs) under heterogeneous human feedback. Methodologically, it adopts a heterogeneous preference framework that jointly models the latent reward of answers and the rationality of human annotators, which leads to a biconvex optimization problem solved by an alternating gradient descent algorithm. The convergence and asymptotic distribution of the resulting reward estimator are established, yielding interpretable confidence intervals for reward estimates and enabling statistically valid comparisons between rewards. Empirically, simulations and applications to real LLM feedback data demonstrate the value of accounting for uncertainty, including an uncertainty-aware best-of-N (BoN) sampling policy, providing both theoretical guarantees and practical tools for trustworthy LLM alignment.
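A minimal sketch of the alternating scheme is given below. It assumes a rationality-scaled Bradley-Terry likelihood in which annotator k prefers answer i over answer j with probability sigmoid(beta_k (r_i - r_j)); the paper's exact parameterization, updates, and identifiability constraints may differ, and all names here are illustrative.

```python
# Sketch: heterogeneous-preference reward learning via alternating gradient
# ascent on a rationality-scaled Bradley-Terry log-likelihood (assumed form).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def alternating_gd(pairs, annotators, prefs, n_items, n_annotators,
                   lr=0.05, n_iters=500):
    """pairs[t] = (i, j); annotators[t] = k; prefs[t] = 1 if annotator k preferred i over j."""
    r = np.zeros(n_items)          # latent rewards of answers
    beta = np.ones(n_annotators)   # per-annotator rationality (inverse temperature)
    for _ in range(n_iters):
        # Update rewards with rationality fixed (convex subproblem in r).
        grad_r = np.zeros(n_items)
        for (i, j), k, y in zip(pairs, annotators, prefs):
            resid = y - sigmoid(beta[k] * (r[i] - r[j]))
            grad_r[i] += beta[k] * resid
            grad_r[j] -= beta[k] * resid
        r += lr * grad_r
        r -= r.mean()              # center rewards for identifiability
        # Update rationality with rewards fixed (convex subproblem in beta).
        grad_b = np.zeros(n_annotators)
        for (i, j), k, y in zip(pairs, annotators, prefs):
            resid = y - sigmoid(beta[k] * (r[i] - r[j]))
            grad_b[k] += (r[i] - r[j]) * resid
        beta = np.maximum(beta + lr * grad_b, 1e-3)  # keep rationality positive
    return r, beta
```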

📝 Abstract
We study estimation and statistical inference for reward models used in aligning large language models (LLMs). A key component of LLM alignment is reinforcement learning from human feedback (RLHF), where humans compare pairs of model-generated answers and their preferences are used to train a reward model. However, human feedback is inherently heterogeneous, creating significant challenges for reliable reward learning. To address this, we adopt a heterogeneous preference framework that jointly models the latent reward of answers and human rationality. This leads to a challenging biconvex optimization problem, which we solve via an alternating gradient descent algorithm. We establish theoretical guarantees for the resulting estimator, including its convergence and asymptotic distribution. These results enable the construction of confidence intervals for reward estimates. Leveraging these uncertainty quantification results, we conduct valid statistical comparisons between rewards and incorporate uncertainty into the best-of-$N$ (BoN) policy framework. Extensive simulations demonstrate the effectiveness of our method, and applications to real LLM data highlight the practical value of accounting for uncertainty in reward modeling for LLM alignment.
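As one illustration of how such confidence intervals can feed into best-of-N sampling, the sketch below picks the candidate with the highest lower confidence bound rather than the highest point estimate. It assumes each candidate answer comes with a reward estimate and a standard error (e.g., from the estimator's asymptotic normality); the paper's actual uncertainty-aware BoN rule may differ.

```python
# Sketch: uncertainty-aware best-of-N selection via lower confidence bounds.
import numpy as np
from scipy.stats import norm

def best_of_n_lcb(reward_hat, std_err, alpha=0.05):
    """Select the candidate with the highest lower confidence bound,
    guarding against candidates whose rewards are overestimated."""
    z = norm.ppf(1 - alpha / 2)
    lcb = np.asarray(reward_hat) - z * np.asarray(std_err)
    return int(np.argmax(lcb))

# Candidate 2 has the largest point estimate but a wide interval,
# so the conservative rule prefers candidate 0.
print(best_of_n_lcb([1.0, 0.6, 1.1], [0.1, 0.2, 0.8]))
```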
Problem

Research questions and friction points this paper is trying to address.

Addresses uncertainty quantification in reward learning from heterogeneous human feedback.
Develops statistical inference methods for aligning large language models via RLHF.
Enables confidence intervals and uncertainty-aware policy comparisons for reward models.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Heterogeneous preference framework modeling latent reward and human rationality
Alternating gradient descent algorithm solving biconvex optimization problem
Uncertainty quantification enabling confidence intervals and statistical comparisons
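A minimal sketch of such a pairwise comparison, assuming reward estimates and (co)variances are available from the estimator's asymptotic covariance; the paper's test statistic and calibration may differ.

```python
# Sketch: two-sided z-test for comparing two estimated rewards.
import numpy as np
from scipy.stats import norm

def compare_rewards(r_i, r_j, var_i, var_j, cov_ij=0.0, alpha=0.05):
    """Test H0: r_i == r_j given point estimates and (co)variances."""
    se = np.sqrt(var_i + var_j - 2.0 * cov_ij)
    z = (r_i - r_j) / se
    p_value = 2.0 * (1.0 - norm.cdf(abs(z)))
    return z, p_value, p_value < alpha

print(compare_rewards(1.2, 0.9, 0.01, 0.02))  # z ≈ 1.73, not significant at the 5% level
```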
Pangpang Liu
Department of Biostatistics, Yale University
Junwei Lu
Department of Biostatistics, Harvard University
Will Wei Sun
Associate Professor, Daniels School of Business, Purdue University
Machine Learning, Statistics