Federated Cubic Regularized Newton Learning with Sparsification-amplified Differential Privacy

📅 2024-08-08
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses two critical challenges in federated learning—privacy leakage and communication overhead—by proposing the Differentially Private Federated Cubic-Regularized Newton (DP-FCRN) algorithm. DP-FCRN is the first to integrate cubic-regularized Newton methods into federated learning, combining local differential privacy (LDP) noise injection, Top-$k$ model sparsification for upload, and distributed second-order optimization. It introduces a novel sparsity-enhanced differential privacy mechanism that reduces the required noise magnitude while guaranteeing $(\varepsilon,\delta)$-privacy. Theoretical analysis establishes global convergence and characterizes the privacy–utility trade-off. Extensive experiments on benchmark datasets demonstrate that DP-FCRN significantly accelerates convergence and reduces communication rounds compared to first-order baselines, while achieving superior model utility under identical privacy budgets.
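The upload path described above (perturb locally, then keep only the largest entries) can be sketched as follows. This is an illustrative reconstruction, not the paper's exact mechanism: the function names, the Gaussian noise model, and the noise-then-sparsify ordering are assumptions for the sketch.

```python
import numpy as np

def top_k_sparsify(update, k):
    """Keep only the k largest-magnitude entries of a model update; zero the rest."""
    out = np.zeros_like(update)
    idx = np.argsort(np.abs(update))[-k:]  # indices of the k largest magnitudes
    out[idx] = update[idx]
    return out

def privatize_and_compress(update, k, sigma, rng):
    """Add Gaussian noise for local DP, then Top-k sparsify before uplink.

    sigma is the noise standard deviation; the paper's claim is that
    sparsification amplifies the privacy guarantee, so a smaller sigma
    suffices for the same (epsilon, delta) budget.
    """
    noisy = update + rng.normal(0.0, sigma, size=update.shape)
    return top_k_sparsify(noisy, k)

rng = np.random.default_rng(0)
local_update = np.array([0.9, -0.1, 0.05, -1.2, 0.3])
uploaded = privatize_and_compress(local_update, k=2, sigma=0.01, rng=rng)
```

Only `k` nonzero coordinates are transmitted per round, which is where both the communication saving and the privacy amplification come from.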

📝 Abstract
This paper investigates the use of the cubic-regularized Newton method within a federated learning framework while addressing two major concerns that commonly arise in federated learning: privacy leakage and communication bottleneck. We introduce a federated learning algorithm called Differentially Private Federated Cubic Regularized Newton (DP-FCRN). By leveraging second-order techniques, our algorithm achieves lower iteration complexity compared to first-order methods. We also incorporate noise perturbation during local computations to ensure privacy. Furthermore, we employ sparsification in uplink transmission, which not only reduces the communication costs but also amplifies the privacy guarantee. Specifically, this approach reduces the necessary noise intensity without compromising privacy protection. We analyze the convergence properties of our algorithm and establish the privacy guarantee. Finally, we validate the effectiveness of the proposed algorithm through experiments on a benchmark dataset.
Problem

Research questions and friction points this paper is trying to address.

Addresses privacy leakage in federated learning
Reduces communication costs in federated learning
Improves convergence with second-order methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cubic-regularized Newton method in federated learning
Noise perturbation ensures differential privacy
Sparsification reduces communication and amplifies privacy
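The second-order core of the method is the cubic-regularized Newton step: instead of the plain Newton direction, each update approximately minimizes a cubic-regularized quadratic model of the loss. A minimal sketch of that subproblem, using a simple gradient-descent inner solver (the paper's actual solver and parameters are not specified here; `M`, `lr`, and `iters` are illustrative assumptions):

```python
import numpy as np

def cubic_newton_step(g, H, M, iters=500, lr=0.1):
    """Approximately solve the cubic-regularized Newton subproblem
        min_s  g^T s + 0.5 * s^T H s + (M/6) * ||s||^3
    where g is the gradient, H the Hessian, and M the cubic penalty.
    Uses plain gradient descent on s as an illustrative inner solver.
    """
    s = np.zeros_like(g)
    for _ in range(iters):
        # gradient of the cubic model: g + H s + (M/2) ||s|| s
        grad = g + H @ s + 0.5 * M * np.linalg.norm(s) * s
        s -= lr * grad
    return s

g = np.array([1.0, -2.0])   # toy gradient
H = np.eye(2) * 2.0         # toy Hessian
step = cubic_newton_step(g, H, M=1.0)
```

The cubic term keeps the step well-defined even when `H` is indefinite, which is what gives cubic-regularized Newton its global convergence guarantees and lower iteration complexity relative to first-order methods.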
Wei Huo
Wireless Technology Lab, 2012, Huawei
Agentic AI · Multi-agent systems
Changxin Liu
East China University of Science and Technology
Distributed optimization · Cyber-physical systems · Model predictive control
Kemi Ding
Research Fellow, Nanyang Technological University
Cyber-physical systems · Game theory · Graph signal processing
K. H. Johansson
Division of Decision and Control Systems, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, and also with Digital Futures, SE-10044 Stockholm, Sweden
Ling Shi
Department of Electronic and Computer Engineering, Hong Kong University of Science and Technology, Hong Kong