🤖 AI Summary
In RLHF, human feedback often suffers from inaccuracy, instability, or systematic annotator bias, causing reward models to deviate from true human intent. To address this, we propose an influence-function-based framework for feedback attribution and optimization: the first to efficiently adapt Hessian-approximated influence functions to LLM reward modeling and large-scale preference datasets. Our method enables label-bias detection and expert-alignment guidance, facilitating active calibration of feedback quality. Integrating preference learning, bias diagnosis, and feedback refinement, it identifies annotator-level systematic biases across multiple RLHF benchmarks and improves reward model alignment with expert judgments by 12.7% in Kendall τ. This enhances both interpretability of human feedback and scalability of supervision—without requiring additional human annotation.
📝 Abstract
In Reinforcement Learning from Human Feedback (RLHF), it is crucial to learn suitable reward models from human feedback to align large language models (LLMs) with human intentions. However, human feedback can often be noisy, inconsistent, or biased, especially when evaluating complex responses. Such feedback can lead to misaligned reward signals, potentially causing unintended side effects during the RLHF process. To address these challenges, we explore the use of influence functions to measure the impact of human feedback on the performance of reward models. We propose a compute-efficient approximation method that enables the application of influence functions to LLM-based reward models and large-scale preference datasets. In our experiments, we demonstrate two key applications of influence functions: (1) detecting common forms of labeler bias in human feedback datasets and (2) guiding labelers to refine their strategies to align more closely with expert feedback. By quantifying the impact of human feedback on reward models, we believe that influence functions can enhance feedback interpretability and contribute to scalable oversight in RLHF, helping labelers provide more accurate and consistent feedback. Source code is available at https://github.com/mintaywon/IF_RLHF