Data-adaptive Safety Rules for Training Reward Models

📅 2025-01-26
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
To address the challenge of fine-grained safety annotation in Reinforcement Learning from Human Feedback (RLHF), this paper proposes a data-adaptive safety rule selection mechanism that dynamically matches each response pair with its most discriminative safety rule, improving the reward model's fidelity to human preferences. Methodologically, it introduces the first rule-adaptive selection framework grounded in maximal response divergence, justified theoretically via a mutual-information analysis showing that this choice maximizes the association between rule-based annotations and true human preferences; it also provides the first instantiation of input-aware, dynamic scheduling of safety rules. Experimentally, the authors cast rule selection as a mathematical optimization problem and train an 8B-parameter reward model, achieving state-of-the-art performance on the RewardBench safety leaderboard (as of 2025-01-25) and significantly outperforming larger baseline models.

๐Ÿ“ Abstract
Reinforcement Learning from Human Feedback (RLHF) is commonly employed to tailor models to human preferences, especially to improve the safety of outputs from large language models (LLMs). Traditionally, this method depends on selecting preferred responses from pairs. However, due to the variability in human opinions and the challenges in directly comparing two responses, there is an increasing trend towards fine-grained annotation approaches that evaluate responses using multiple targeted metrics or rules. The challenge lies in efficiently choosing and applying these rules to handle the diverse range of preference data. In this paper, we propose a dynamic method that adaptively selects the most important rules for each response pair. We introduce a mathematical framework that utilizes the maximum discrepancy across paired responses and demonstrate theoretically that this approach maximizes the mutual information between the rule-based annotations and the underlying true preferences. We then train an 8B reward model using this adaptively labeled preference dataset and assess its efficacy using RewardBench. As of January 25, 2025, our model achieved the highest safety performance on the leaderboard, surpassing various larger models.
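The selection criterion described in the abstract, picking the rule with the maximum discrepancy across a paired response, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the rule scorers below are toy keyword checks standing in for the paper's actual rule-based evaluators, and the function names (`select_rule`, `label_pair`) are hypothetical.

```python
def select_rule(response_a, response_b, rules):
    """Pick the rule with the largest score gap between the two responses.

    The paper's theoretical claim is that this max-discrepancy choice
    maximizes mutual information between the rule-based annotation and
    the underlying true preference.
    """
    best_rule, best_gap = None, -1.0
    for name, score in rules.items():
        gap = abs(score(response_a) - score(response_b))
        if gap > best_gap:
            best_rule, best_gap = name, gap
    return best_rule


def label_pair(response_a, response_b, rules):
    """Annotate the pair ('a' preferred or 'b') using the selected rule."""
    rule = rules[select_rule(response_a, response_b, rules)]
    return "a" if rule(response_a) > rule(response_b) else "b"


# Toy safety rules for illustration only (hypothetical stand-ins for the
# fine-grained safety rules used in the paper).
toy_rules = {
    "no_harmful_instructions": lambda r: 0.0 if "harm" in r.lower() else 1.0,
    "polite_refusal": lambda r: 1.0 if "sorry" in r.lower() else 0.0,
}

pair = ("Sure, here is how to cause harm.", "I cannot help with that, sorry.")
print(select_rule(*pair, toy_rules))  # rule with the widest score gap
print(label_pair(*pair, toy_rules))   # preferred response under that rule
```

In the full pipeline, the preference labels produced this way would replace direct pairwise annotation when training the 8B reward model.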
Problem

Research questions and friction points this paper is trying to address.

Reinforcement Learning
Large Language Models
Safety Training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated Rule Selection
Safe Reinforcement Learning
Human Preference Modeling
🔎 Similar Papers
No similar papers found.
Xiaomin Li
Harvard University
Mingye Gao
Massachusetts Institute of Technology
Zhiwei Zhang
Pennsylvania State University
Jingxuan Fan
Harvard
systems neuroscience, LLM
Weiyu Li
The Hong Kong University of Science and Technology
Computer Graphics, Neural Rendering, 3D Content Generation