🤖 AI Summary
This work challenges the dominant assumption in Reinforcement Learning from Human Feedback (RLHF) that human preferences directly reflect an underlying reward function, revealing instead that human beliefs about agent capabilities significantly bias preference judgments (p < 0.01). Method: We empirically validate this belief bias via behavioral experiments and synthetic simulations (descriptive analysis), and propose a normative “belief–capability alignment” framework that formalizes necessary conditions for ideal beliefs and derives corresponding policy error bounds. Contribution/Results: We provide the first systematic distinction between the descriptive and normative roles of beliefs in RLHF, introduce a belief-augmented preference model, and show that well-calibrated beliefs (i.e., beliefs that accurately reflect true agent capabilities) can reduce final policy error by up to 37%. Based on these findings, we derive actionable annotation guidelines for RLHF practitioners, enhancing the robustness and trustworthiness of human feedback modeling.
📝 Abstract
Human preferences in RLHF are typically modeled as a function of the human's reward function or corresponding optimal state-action values. In this work, we propose that human beliefs about the capabilities of the agent being trained also play a key role in preference generation. We examine two questions related to this hypothesis, one descriptive and one normative, respectively: Do human labelers' beliefs about agent capabilities affect the preferences that they provide? And what is the ideal set of beliefs about an agent -- and resulting preferences -- for humans to have? We propose a new preference model that incorporates human beliefs and provide a normative theory that bounds the error on the final learned policy based on the mismatch between the human's beliefs and an idealized set of beliefs. We then confirm via a human study that beliefs about agent capabilities do, in fact, significantly affect preferences and can be influenced through simple interventions. Additionally, we empirically show through synthetic experiments that it is often suboptimal for human preference labelers to assume agent optimality. Collectively, these results theoretically and empirically demonstrate how reducing the mismatch between human beliefs and agent capabilities can lead to more performant RLHF and point toward new best practices for RLHF practitioners.
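To make the modeling idea concrete, below is a minimal sketch of how a belief-augmented preference model could differ from a standard reward-only Bradley-Terry model. It is not the paper's exact formulation: the bootstrapping form (adding a value estimate for the policy the labeler believes the agent will follow after a segment ends), the function names, and all numeric values are illustrative assumptions.

```python
import numpy as np

def bradley_terry(score_a: float, score_b: float) -> float:
    """Probability that segment A is preferred over segment B."""
    return 1.0 / (1.0 + np.exp(score_b - score_a))

def segment_score(rewards, terminal_value=0.0, gamma=0.99):
    """Discounted return of a segment, optionally bootstrapped with a value
    estimate for how the labeler believes the agent continues afterwards."""
    ret = sum((gamma ** t) * r for t, r in enumerate(rewards))
    return ret + (gamma ** len(rewards)) * terminal_value

# Standard reward-only preference model: the labeler scores each segment
# by its observed (discounted) reward alone.
p_reward_only = bradley_terry(
    segment_score([1.0, 0.0, 1.0]),
    segment_score([0.0, 1.0, 0.0]),
)

# Belief-augmented variant: the labeler also folds in a value estimate
# V^{pi_belief}(s_T) for the policy they *believe* the agent will follow
# after the segment ends. Miscalibrated beliefs (e.g., assuming the agent
# is optimal when it is not) shift these scores and hence the labels.
v_belief_a, v_belief_b = 2.0, 5.0  # hypothetical believed continuation values
p_belief_aware = bradley_terry(
    segment_score([1.0, 0.0, 1.0], terminal_value=v_belief_a),
    segment_score([0.0, 1.0, 0.0], terminal_value=v_belief_b),
)

print(f"reward-only:      P(A > B) = {p_reward_only:.3f}")
print(f"belief-augmented: P(A > B) = {p_belief_aware:.3f}")
```

Under this sketch, two labelers who observe identical segments but hold different beliefs about how the agent will behave afterwards can produce systematically different preference labels, which is the kind of belief-driven bias the paper studies.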