🤖 AI Summary
To address the challenge of fine-grained safety annotation in Reinforcement Learning from Human Feedback (RLHF), this paper proposes a data-adaptive safety rule selection mechanism that dynamically matches each response pair with the most discriminative safety rule, thereby improving the reward model's fidelity to human preferences. Methodologically, it introduces the first rule-adaptive selection framework grounded in maximal response divergence, theoretically justified via a mutual-information analysis showing that this selection maximizes the association between rule-based annotations and true human preferences; this yields input-aware, dynamic scheduling of safety rules, the first such instantiation. Experimentally, the selection is formulated as a mathematical optimization problem and used to train an 8B-parameter reward model, which achieves state-of-the-art performance on the RewardBench safety leaderboard (as of 2025-01-25), significantly outperforming larger baseline models.
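The core selection criterion above, matching each response pair with the rule that discriminates most strongly between the two responses, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the per-rule safety scores are assumed to be given by some upstream scorer.

```python
def select_rule(scores_a, scores_b):
    """Pick the index of the rule with maximal discrepancy across a response pair.

    scores_a[i] and scores_b[i] are rule i's safety scores for responses A and B
    (hypothetical inputs; the paper's actual scoring procedure may differ).
    """
    gaps = [abs(a - b) for a, b in zip(scores_a, scores_b)]
    # The most discriminative rule is the one with the largest score gap.
    return max(range(len(gaps)), key=gaps.__getitem__)


# Rule 1 separates the pair most (0.2 vs 0.7), so it is selected.
chosen = select_rule([0.9, 0.2, 0.5], [0.8, 0.7, 0.5])
```

Intuitively, a rule on which both responses score identically carries no information about which response is preferable, while a rule with a large gap yields a confident, informative annotation.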
📄 Abstract
Reinforcement Learning from Human Feedback (RLHF) is commonly employed to align models with human preferences, particularly to improve the safety of outputs from large language models (LLMs). Traditionally, this method relies on annotators selecting the preferred response from a pair. However, owing to the variability of human opinions and the difficulty of directly comparing two responses, there is a growing trend toward fine-grained annotation approaches that evaluate responses along multiple targeted metrics or rules. The challenge lies in efficiently choosing and applying these rules to handle the diverse range of preference data. In this paper, we propose a dynamic method that adaptively selects the most important rules for each response pair. We introduce a mathematical framework that utilizes the maximum discrepancy across paired responses and demonstrate theoretically that this approach maximizes the mutual information between the rule-based annotations and the underlying true preferences. We then train an 8B reward model using this adaptively labeled preference dataset and evaluate it on RewardBench. As of January 25, 2025, our model achieved the highest safety performance on the leaderboard, surpassing various larger models.
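Reward models for pairwise preference data are conventionally trained with a Bradley-Terry style objective: maximize the likelihood that the preferred (rule-annotated) response receives the higher reward. A minimal sketch of that standard loss, under the assumption that the paper follows this common recipe (its exact training setup is not specified here):

```python
import math

def bradley_terry_loss(r_chosen, r_rejected):
    """Negative log-likelihood that the chosen response outranks the rejected one.

    r_chosen / r_rejected are scalar rewards from the reward model for the
    preferred and dispreferred responses in an annotated pair.
    """
    margin = r_chosen - r_rejected
    # sigmoid(margin) is the Bradley-Terry probability of the observed preference
    return -math.log(1.0 / (1.0 + math.exp(-margin)))


# A larger reward margin in favor of the chosen response gives a lower loss.
equal = bradley_terry_loss(0.0, 0.0)      # log(2): the model is indifferent
confident = bradley_terry_loss(2.0, 0.0)  # smaller: preference is respected
```

Minimizing this loss over the adaptively labeled pairs pushes the reward model's scores to agree with the rule-based annotations, and hence, per the mutual-information argument, with the underlying human preferences.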