🤖 AI Summary
Existing RLHF paradigms rely on a single aggregated reward model, which fails to capture the diversity and heterogeneity of human values and systematically marginalizes minority preferences. This work proposes a reflective-dialogue-based personalized reward modeling framework: a language model guides users through structured value reflection to elicit critical feedback and preference demonstrations; this dialogue history then serves as context for a second language model that acts as an individualized verbal reward model. The approach enables multi-value alignment, strengthens representation of minority preferences, and improves model fairness. In a 30-participant user study, the reflective verbal reward model achieves 9–12% higher accuracy than its non-reflective counterpart and is markedly more sample-efficient than conventional supervised learning methods.
📝 Abstract
AI agents are commonly aligned with "human values" through reinforcement learning from human feedback (RLHF), where a single reward model is learned from aggregated human feedback and used to align an agent's behavior. However, human values are not homogeneous; different people hold distinct and sometimes conflicting values. Aggregating feedback into a single reward model risks disproportionately suppressing minority preferences. To address this, we present a novel reward modeling approach for learning individualized reward models. Our approach uses a language model to guide users through reflective dialogues where they critique agent behavior and construct their preferences. This personalized dialogue history, containing the user's reflections and critiqued examples, is then used as context for another language model that serves as an individualized reward function (what we call a "verbal reward model") for evaluating new trajectories. In studies with 30 participants, our method achieved a 9–12% improvement in accuracy over non-reflective verbal reward models while being more sample efficient than traditional supervised learning methods.
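The core mechanism described above can be sketched in a few lines: the user's reflective dialogue history is packed into the prompt of a judge language model, which then scores new trajectories. This is a minimal illustration, not the paper's implementation; the `llm` callable and prompt wording are assumptions, since the abstract does not specify an API or prompt format.

```python
# Minimal sketch of a "verbal reward model": the user's reflective dialogue
# history becomes the context of a judge LLM that scores a new trajectory.
# The `llm` argument is a placeholder assumption -- any text-completion
# function (e.g. a wrapper around a hosted model) can be plugged in.

def build_reward_prompt(dialogue_history: list[str], trajectory: str) -> str:
    """Assemble an individualized evaluation prompt from the user's
    reflections/critiqued examples and a candidate agent trajectory."""
    history = "\n".join(f"- {turn}" for turn in dialogue_history)
    return (
        "You are a personalized reward model for one specific user.\n"
        "The user's reflections and critiqued examples:\n"
        f"{history}\n\n"
        "Rate how well the following agent trajectory matches this user's "
        "values, from 1 (poor) to 10 (excellent). Reply with a number only.\n"
        f"Trajectory: {trajectory}"
    )

def verbal_reward(llm, dialogue_history: list[str], trajectory: str) -> float:
    """Query the judge LLM and parse its scalar score as the reward."""
    reply = llm(build_reward_prompt(dialogue_history, trajectory))
    return float(reply.strip())

# Stub LLM for illustration only; a real deployment would call an actual model.
stub_llm = lambda prompt: "8"
score = verbal_reward(
    stub_llm,
    ["I prefer concise, cautious answers over confident speculation."],
    "Agent gives a short, hedged recommendation with caveats.",
)
print(score)  # 8.0
```

Because the reward model is just conditioning on the user's own words and examples, adapting it to a new person requires only collecting a fresh dialogue history rather than retraining any weights, which is the source of the sample-efficiency advantage claimed over supervised learning.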