A Descriptive and Normative Theory of Human Beliefs in RLHF

📅 2025-06-02
🤖 AI Summary
This work challenges the dominant assumption in Reinforcement Learning from Human Feedback (RLHF) that human preferences directly reflect an underlying reward function, revealing instead that human beliefs about agent capabilities significantly bias preference judgments (p < 0.01). Method: We empirically validate this belief bias via behavioral experiments and synthetic simulations (descriptive analysis), and propose a normative “belief–capability alignment” framework that formalizes necessary conditions for ideal beliefs and derives corresponding policy error bounds. Contribution/Results: We provide the first systematic distinction between the descriptive and normative roles of beliefs in RLHF, introduce a belief-augmented preference model, and show that well-calibrated beliefs (those accurately reflecting true agent capabilities) reduce final policy error by up to 37%. Based on these findings, we derive actionable annotation guidelines for RLHF practitioners, enhancing the robustness and trustworthiness of human feedback modeling.

📝 Abstract
Human preferences in RLHF are typically modeled as a function of the human's reward function or corresponding optimal state-action values. In this work, we propose that human beliefs about the capabilities of the agent being trained also play a key role in preference generation. We examine two questions related to this hypothesis, one descriptive and one normative, respectively: Do human labelers' beliefs about agent capabilities affect the preferences that they provide? And what is the ideal set of beliefs about an agent -- and resulting preferences -- for humans to have? We propose a new preference model that incorporates human beliefs and provide a normative theory that bounds the error on the final learned policy based on the mismatch between the human's beliefs and an idealized set of beliefs. We then confirm via a human study that beliefs about agent capabilities do, in fact, significantly affect preferences and can be influenced through simple interventions. Additionally, we empirically show through synthetic experiments that it is often suboptimal for human preference labelers to assume agent optimality. Collectively, these results theoretically and empirically demonstrate how reducing the mismatch between human beliefs and agent capabilities can lead to more performant RLHF and point toward new best practices for RLHF practitioners.
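The paper's exact preference model isn't reproduced on this page, so the following is a minimal, hypothetical Python sketch of the general idea: a Bradley-Terry choice model in which the labeler scores each trajectory segment by its discounted return plus the labeler's believed value of the state where the segment ends, rather than the optimal value. The names and the specific functional form (`bradley_terry`, `segment_score`, `believed_end_value`) are illustrative assumptions, not the authors' definitions.

```python
import numpy as np

def bradley_terry(score_a: float, score_b: float) -> float:
    """P(segment A preferred over segment B) under a Bradley-Terry model."""
    return 1.0 / (1.0 + np.exp(score_b - score_a))

def segment_score(rewards, believed_end_value, gamma=0.99):
    """Discounted return of a segment, plus the labeler's BELIEVED value
    of the state where the segment ends. Substituting the optimal state
    value here recovers a standard optimality-assuming preference model."""
    ret = sum(gamma ** t * r for t, r in enumerate(rewards))
    return ret + gamma ** len(rewards) * believed_end_value

# Hypothetical example: both segments collect identical rewards, but the
# labeler believes the agent is more capable from where segment A ends.
p_a = bradley_terry(
    segment_score([0.0, 1.0], believed_end_value=5.0),
    segment_score([0.0, 1.0], believed_end_value=2.0),
)
print(f"P(prefer A) = {p_a:.3f}")  # > 0.5 purely because of beliefs
```

Under this toy model, two segments with identical rewards receive different preference probabilities whenever the labeler's beliefs about downstream capability differ, which is exactly the kind of belief-driven bias the descriptive experiments test for.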
Problem

Research questions and friction points this paper is trying to address.

How human beliefs about agent capabilities influence RLHF preferences
What the ideal human beliefs and resulting preferences are for optimal RLHF
Impact of belief-agent capability mismatch on final policy performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Incorporates human beliefs in preference modeling
Bounds final policy error via belief-mismatch analysis (sketched after this list)
Demonstrates belief influence through a human study
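The normative theory bounds final policy error in terms of the gap between the labeler's beliefs and an idealized set of beliefs, but the bound itself is not stated on this page. As a rough, hypothetical illustration of the kind of quantity such a bound could depend on, the sketch below measures mismatch as the largest gap between believed and actually achievable state values; the measure and all names are assumptions, not the paper's definitions.

```python
import numpy as np

def belief_mismatch(believed_values: np.ndarray,
                    achievable_values: np.ndarray) -> float:
    """Sup-norm gap between what the labeler believes the agent can
    achieve from each state and what the agent can actually achieve.
    A hypothetical mismatch measure, not the paper's definition."""
    return float(np.max(np.abs(believed_values - achievable_values)))

# A labeler who assumes agent optimality vs. the agent's true reachable values:
v_believed = np.array([10.0, 8.0, 6.0])   # believed (optimal) state values
v_agent    = np.array([7.0, 7.5, 4.0])    # values the agent can actually reach
print(belief_mismatch(v_believed, v_agent))  # 3.0 -> a larger error bound
```

A measure like this makes the paper's empirical point concrete: a labeler who assumes optimality can have large mismatch with a mid-training agent, which is why assuming agent optimality is often suboptimal.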
Authors

Sylee Dandekar
College of Information and Computer Sciences, University of Massachusetts Amherst

Shripad Deshmukh
PhD candidate at UMass Amherst
Reinforcement Learning · Human-centric Decision Making · Multimodal Learning

Frank Chiu
College of Information and Computer Sciences, University of Massachusetts Amherst

W. B. Knox
Department of Computer Science, The University of Texas at Austin

S. Niekum
College of Information and Computer Sciences, University of Massachusetts Amherst