PAC-DP: Personalized Adaptive Clipping for Differentially Private Federated Learning

📅 2026-03-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the suboptimal privacy-utility trade-off in traditional differentially private federated learning, which employs a fixed gradient clipping threshold and fails to account for client heterogeneity and varying privacy sensitivities. To overcome this limitation, we propose PAC-DP, a novel framework that introduces the first privacy-budget-aware personalized adaptive clipping mechanism. The server leverages public proxy data to establish a mapping between privacy budgets and clipping thresholds, enabling online adaptive clipping without access to private client data or manual hyperparameter tuning, complemented by a lightweight round scheduling strategy. Integrating per-sample gradient clipping, Gaussian noise injection, and record-level local differential privacy, our method provides convergence guarantees and reproducible privacy accounting. Experiments across multiple federated benchmarks demonstrate that, under identical privacy budgets, PAC-DP improves model accuracy by up to 26% and accelerates convergence by as much as 45.5% compared to fixed-threshold approaches.
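The budget-to-threshold mapping described above could, in spirit, be sketched as follows: the server simulates training on its public proxy data to find a good clipping threshold for each privacy budget, then fits a simple curve so thresholds can be looked up online. The log-linear functional form, the sample (ε, C) values, and the function names here are illustrative assumptions, not the paper's actual fitted procedure.

```python
import numpy as np

# Hypothetical (epsilon, best clipping threshold) pairs, as if obtained by
# simulating DP training on a public proxy dataset; values are illustrative.
eps = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
best_C = np.array([0.4, 0.7, 1.1, 1.6, 2.2])

# Fit C(eps) = a + b * log(eps). The log-linear form is an assumption;
# the paper's curve-fitting model is not reproduced on this page.
b, a = np.polyfit(np.log(eps), best_C, 1)

def clip_threshold(epsilon):
    """Budget-conditioned threshold lookup the server could use online."""
    return a + b * np.log(epsilon)
```

Once fitted, each client's personalized budget ε maps directly to a clipping threshold without any tuning on private client data.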

📝 Abstract
Differential privacy (DP) is crucial for safeguarding sensitive client information in federated learning (FL), yet traditional DP-FL methods rely predominantly on fixed gradient clipping thresholds. Such static clipping neglects significant client heterogeneity and varying privacy sensitivities, which may lead to an unfavorable privacy-utility trade-off. In this paper, we propose PAC-DP, a Personalized Adaptive Clipping framework for federated learning under record-level local differential privacy. PAC-DP introduces a Simulation-CurveFitting approach that leverages a server-hosted public proxy dataset to learn an effective mapping between personalized privacy budgets ε and gradient clipping thresholds C, which is then deployed online with a lightweight round-wise schedule. This design enables budget-conditioned threshold selection while avoiding data-dependent tuning during training. We provide theoretical analyses establishing convergence guarantees under the per-example clipping and Gaussian perturbation mechanism, along with a reproducible privacy accounting procedure. Extensive evaluations on multiple FL benchmarks show that PAC-DP surpasses conventional fixed-threshold approaches under matched privacy budgets, improving accuracy by up to 26% and accelerating convergence by up to 45.5% in our evaluated settings.
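The per-example clipping and Gaussian perturbation mechanism referenced in the abstract can be sketched in a generic DP-SGD style. This is a minimal illustration of the standard mechanism, not the paper's implementation; the function name and the `noise_multiplier` parameterization are assumptions.

```python
import numpy as np

def clip_and_noise(per_example_grads, C, noise_multiplier, rng):
    """Clip each example's gradient to L2 norm at most C, sum, add
    Gaussian noise calibrated to sensitivity C, and average."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down only if the gradient exceeds the threshold C
        clipped.append(g * min(1.0, C / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    # Gaussian mechanism: noise std proportional to the clipping bound
    noisy = total + rng.normal(0.0, noise_multiplier * C, size=total.shape)
    return noisy / len(per_example_grads)
```

In PAC-DP the threshold C would be chosen per client from its privacy budget via the learned ε-to-C mapping rather than fixed globally.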
Problem

Research questions and friction points this paper is trying to address.

differential privacy
federated learning
gradient clipping
client heterogeneity
privacy-utility trade-off
Innovation

Methods, ideas, or system contributions that make the work stand out.

Personalized Adaptive Clipping
Differential Privacy
Federated Learning
Gradient Clipping
Privacy-Utility Trade-off
Hao Zhou
Associate Professor at Nanjing University; Researcher at ByteDance
Large Language Model
Information Retrieval
RAG
Agent
Siqi Cai
School of Computer Science, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
Hua Dai
Martin V Smith School of Business and Economics, California State University Channel Islands
Geng Yang
School of Computer Science, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
Jing Luo
Shandong University
Natural Language Processing
Hui Cai
School of Computer Science, Nanjing University of Posts and Telecommunications, Nanjing 210023, China