Improved Algorithms for Differentially Private Language Model Alignment

📅 2025-05-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) suffer degraded alignment performance under differential privacy (DP) constraints, posing a fundamental challenge to privacy-preserving LLM deployment. Method: This paper proposes a unified algorithmic framework that jointly optimizes privacy guarantees and alignment quality, compatible with both Direct Preference Optimization (DPO) and Reinforcement Learning from Human Feedback (RLHF). Its core contributions are DP-AdamW, a novel differentially private optimizer, and a systematic analysis of the three-way trade-off among privacy budget (ε), alignment fidelity, and computational overhead. Contribution/Results: The framework comes with rigorous theoretical guarantees and practical hyperparameter guidance. Experiments on large-scale models show that DP-AdamW combined with DPO improves alignment quality by up to 15% under moderate privacy budgets (ε = 2–5), outperforming existing DP-aware alignment methods and establishing new state-of-the-art performance.

📝 Abstract
Language model alignment is crucial for ensuring that large language models (LLMs) align with human preferences, yet it often involves sensitive user data, raising significant privacy concerns. While prior work has integrated differential privacy (DP) with alignment techniques, their performance remains limited. In this paper, we propose novel algorithms for privacy-preserving alignment and rigorously analyze their effectiveness across varying privacy budgets and models. Our framework can be deployed on two celebrated alignment techniques, namely direct preference optimization (DPO) and reinforcement learning from human feedback (RLHF). Through systematic experiments on large-scale language models, we demonstrate that our approach achieves state-of-the-art performance. Notably, one of our algorithms, DP-AdamW, combined with DPO, surpasses existing methods, improving alignment quality by up to 15% under moderate privacy budgets (ε = 2–5). We further investigate the interplay between privacy guarantees, alignment efficacy, and computational demands, providing practical guidelines for optimizing these trade-offs.
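For context on one of the two alignment techniques the abstract names: the standard (non-private) DPO objective on a single preference pair scores how much the policy prefers the chosen response over the rejected one, relative to a frozen reference model. The sketch below implements the published DPO loss in plain Python; it is background, not the paper's private variant, and the argument names and `beta` value are illustrative.

```python
import math

def dpo_loss(policy_logp_chosen, policy_logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO loss for one preference pair.

    Each argument is the summed log-probability of a full response
    under the policy or the frozen reference model.
    """
    # Implicit reward margin: how much more the policy (vs. the
    # reference) favors the chosen response over the rejected one.
    margin = beta * ((policy_logp_chosen - ref_logp_chosen)
                     - (policy_logp_rejected - ref_logp_rejected))
    # -log(sigmoid(margin)): small when the policy already prefers
    # the chosen response, large when it prefers the rejected one.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

When the policy and reference agree exactly, the margin is zero and the loss is log 2; widening the policy's preference for the chosen response drives the loss toward zero.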
Problem

Research questions and friction points this paper is trying to address.

Enhancing privacy-preserving alignment of large language models
Improving the performance of alignment techniques under differential privacy constraints
Optimizing trade-offs between privacy, alignment quality, and computational costs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Novel algorithms for privacy-preserving alignment
Framework deployable on DPO and RLHF techniques
DP-AdamW with DPO improves alignment quality by up to 15% under moderate privacy budgets (ε = 2–5)
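The listing does not spell out DP-AdamW's update rule. A plausible reading, given the name, is the standard DP-SGD recipe (per-example gradient clipping plus Gaussian noise) grafted onto an AdamW step with decoupled weight decay. The sketch below illustrates that combination on flat parameter lists; every name and hyperparameter here is an assumption for illustration, not the authors' implementation.

```python
import math
import random

def clip_grad(g, C):
    """Clip one per-example gradient vector to L2 norm at most C."""
    norm = math.sqrt(sum(x * x for x in g))
    scale = min(1.0, C / (norm + 1e-12))
    return [x * scale for x in g]

def dp_adamw_step(params, per_example_grads, state, *, C=1.0, sigma=1.0,
                  lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8, wd=0.01,
                  rng=random):
    """One hypothetical DP-AdamW step: clip, aggregate, noise, AdamW update."""
    n = len(per_example_grads)
    clipped = [clip_grad(g, C) for g in per_example_grads]
    # Average the clipped per-example gradients, then add Gaussian noise
    # with standard deviation sigma * C / n (noise calibrated to the
    # clipping bound, as in DP-SGD).
    noisy = [sum(g[i] for g in clipped) / n + rng.gauss(0.0, sigma * C / n)
             for i in range(len(params))]
    state["t"] += 1
    t = state["t"]
    new_params = []
    for i, (p, g) in enumerate(zip(params, noisy)):
        # Bias-corrected first and second moments (Adam).
        state["m"][i] = beta1 * state["m"][i] + (1 - beta1) * g
        state["v"][i] = beta2 * state["v"][i] + (1 - beta2) * g * g
        m_hat = state["m"][i] / (1 - beta1 ** t)
        v_hat = state["v"][i] / (1 - beta2 ** t)
        # Decoupled weight decay (the "W" in AdamW), applied outside
        # the adaptive term.
        new_params.append(p - lr * (m_hat / (math.sqrt(v_hat) + eps) + wd * p))
    return new_params
```

Because clipping bounds each example's influence and the noise scale is tied to that bound, the privacy analysis of the noisy gradient carries over unchanged; the AdamW machinery only post-processes it.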
🔎 Similar Papers
2024-06-05 · arXiv.org · Citations: 1
Keyu Chen (Peking University), Hao Tang (Peking University), Qinglin Liu (Harbin Institute of Technology), Yizhao Xu (Peking University)