🤖 AI Summary
Existing safety alignment methods for large language models often treat safety and utility as a trade-off, making it difficult to simultaneously achieve high assurance and high usability in sensitive applications. This paper proposes a Reinforcement Learning from Human Feedback (RLHF) framework under high-confidence safety constraints, the first to incorporate Upper Confidence Bound (UCB) theory into RLHF, yielding a provable, user-controllable upper bound on safety failure probability. By decoupling preference modeling, we design a two-stage mechanism: pessimistic cost-constrained pre-optimization followed by post-hoc safety verification. Our method employs a dual-architecture reward model and cost model, supported by theoretical error-bound analysis. Experiments on Qwen2-1.5B, Qwen2.5-3B, and LLaMA3.2-3B demonstrate that safety violation rates strictly remain below prescribed thresholds (e.g., 1%), while significantly improving both harmfulness suppression rates and usefulness scores.
📄 Abstract
Existing approaches to language model alignment often treat safety as a trade-off against helpfulness, which can lead to unacceptable responses in sensitive domains. To ensure reliable performance in such settings, we propose High-Confidence Safe Reinforcement Learning from Human Feedback (HC-RLHF), a method that provides high-confidence safety guarantees while maximizing helpfulness. Similar to previous methods, HC-RLHF explicitly decouples human preferences into helpfulness and harmlessness (safety), which are learned by training a reward model and a cost model, respectively. It then employs a two-step process to find safe solutions. In the first step, it optimizes the reward function under an intentionally pessimistic version of the cost constraint. In the second step, the trained model undergoes a safety test to verify whether its performance stays within an upper-confidence bound of the actual cost constraint. We provide a theoretical analysis of HC-RLHF, including proof that it will not return an unsafe solution with a probability greater than a user-specified threshold. For our empirical analysis, we apply HC-RLHF to align three different language models (Qwen2-1.5B, Qwen2.5-3B, and LLaMA3.2-3B) with human preferences. Our results demonstrate that HC-RLHF produces safe models with high probability and can improve harmlessness and helpfulness compared to previous methods.
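The post-hoc safety test described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes i.i.d. per-response costs bounded in [0, 1] and uses a Hoeffding-style one-sided upper confidence bound (the paper may use a different concentration bound). The function names `hoeffding_ucb` and `safety_test`, the cost threshold, and the confidence parameter `delta` are all illustrative.

```python
import math

def hoeffding_ucb(costs, delta):
    """One-sided (1 - delta)-confidence upper bound on the mean of
    i.i.d. costs bounded in [0, 1], via Hoeffding's inequality:
    mean + sqrt(ln(1/delta) / (2n))."""
    n = len(costs)
    sample_mean = sum(costs) / n
    return sample_mean + math.sqrt(math.log(1.0 / delta) / (2.0 * n))

def safety_test(costs, threshold, delta):
    """Accept the candidate model only if the upper confidence bound
    on its expected cost is within the threshold. Because the bound
    holds with probability at least 1 - delta, a model whose true
    expected cost exceeds the threshold passes with probability at
    most delta -- the user-controllable failure probability."""
    return hoeffding_ucb(costs, delta) <= threshold

# Example: 1000 held-out responses, none flagged as harmful (cost 0),
# tested against a 10% cost threshold at 95% confidence.
print(safety_test([0.0] * 1000, threshold=0.1, delta=0.05))
```

Note the asymmetry this test encodes: failing the test ("No Solution Found") is always an allowed outcome, so the guarantee only bounds the probability of wrongly *accepting* an unsafe model, which is why the first stage optimizes against a pessimistic version of the constraint to make acceptance likely.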