🤖 AI Summary
Existing RLHF frameworks provide only item-level privacy protection for user preferences, which falls short of real-world users' privacy requirements. Method: this paper introduces user-level label differential privacy (DP) into RLHF alignment for the first time. Theoretically, it derives the first utility lower bound for DP-RLHF under user-level privacy constraints. Algorithmically, it proposes AUP-RLHF, a method that integrates user-level randomized response, dynamic privacy-budget allocation, and policy-gradient updates, with provable (ε, δ) user-level privacy guarantees. Results: experiments on sentiment generation and summarization tasks show that, under identical privacy budgets, AUP-RLHF improves BLEU/ROUGE scores by up to 12.3% over baselines while reducing KL divergence by 37%, achieving a superior privacy–utility trade-off.
📝 Abstract
Reinforcement Learning with Human Feedback (RLHF) has emerged as an influential technique, enabling the alignment of large language models (LLMs) with human preferences. Despite the promising potential of RLHF, protecting the privacy of user preferences has become a crucial issue. Most previous work has used differential privacy (DP) to protect individual data, but it has concentrated primarily on item-level privacy protection and performs poorly under user-level privacy, which is more common in RLHF. This study proposes a novel framework, AUP-RLHF, which integrates user-level label DP into RLHF. We first show that the classical randomized response algorithm, which achieves acceptable performance under item-level privacy, leads to suboptimal utility in user-level settings. We then establish a lower bound for user-level label DP-RLHF and develop the AUP-RLHF algorithm, which guarantees $(\varepsilon, \delta)$ user-level privacy and achieves an improved estimation error. Experimental results show that AUP-RLHF outperforms existing baseline methods on sentiment generation and summarization tasks, achieving a better privacy–utility trade-off.
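To make the item-level baseline concrete, below is a minimal sketch of the classical randomized response mechanism the abstract refers to, applied to a binary preference label. This is an illustrative implementation of the textbook mechanism, not the paper's AUP-RLHF algorithm; the function name and the use of binary labels are assumptions for the example.

```python
import math
import random

def randomized_response(label: bool, epsilon: float) -> bool:
    """Classical randomized response for one binary preference label.

    Keeps the true label with probability e^eps / (e^eps + 1) and flips
    it otherwise, which satisfies eps-label-DP for a single item. Applied
    independently per item, the per-user guarantee degrades with the
    number of labels a user contributes -- the weakness in user-level
    settings that the abstract points out.
    """
    p_keep = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return label if random.random() < p_keep else not label

def debiased_mean(noisy_labels: list, epsilon: float) -> float:
    """Unbiased estimate of the true fraction of positive labels,
    inverting the known flip probability of randomized response."""
    p_keep = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    noisy_mean = sum(noisy_labels) / len(noisy_labels)
    return (noisy_mean - (1.0 - p_keep)) / (2.0 * p_keep - 1.0)
```

For a single comparison the mechanism is cheap and accurate, but when one user supplies k labels, composing k independent applications consumes roughly k·ε of that user's budget, motivating the user-level mechanisms studied in the paper.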