🤖 AI Summary
This paper addresses two critical challenges in federated learning—privacy leakage and communication overhead—by proposing the Differentially Private Federated Cubic-Regularized Newton (DP-FCRN) algorithm. DP-FCRN is the first to integrate cubic-regularized Newton methods into federated learning, combining local differential privacy (LDP) noise injection, Top-$k$ model sparsification for upload, and distributed second-order optimization. It introduces a novel sparsity-enhanced differential privacy mechanism that reduces the required noise magnitude while guaranteeing $(\varepsilon,\delta)$-privacy. Theoretical analysis establishes global convergence and characterizes the privacy–utility trade-off. Extensive experiments on benchmark datasets demonstrate that DP-FCRN significantly accelerates convergence and reduces communication rounds compared to first-order baselines, while achieving superior model utility under identical privacy budgets.
📝 Abstract
This paper investigates the use of the cubic-regularized Newton method within a federated learning framework while addressing two major concerns that commonly arise in federated learning: privacy leakage and the communication bottleneck. We introduce a federated learning algorithm called Differentially Private Federated Cubic-Regularized Newton (DP-FCRN). By leveraging second-order techniques, our algorithm achieves lower iteration complexity than first-order methods. We also incorporate noise perturbation during local computations to ensure privacy. Furthermore, we employ sparsification in uplink transmission, which not only reduces communication costs but also amplifies the privacy guarantee. Specifically, this approach reduces the necessary noise intensity without compromising privacy protection. We analyze the convergence properties of our algorithm and establish its privacy guarantee. Finally, we validate the effectiveness of the proposed algorithm through experiments on a benchmark dataset.
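The two client-side ingredients described above — noise perturbation of the local update for differential privacy, followed by Top-$k$ sparsification of the uplink message — can be sketched as follows. This is a minimal illustration, not the paper's exact mechanism: the function names, the L2-clipping step, and the noise scale `sigma` are assumptions chosen for a standard Gaussian-mechanism setup, and the real algorithm applies these ideas to second-order (cubic-regularized Newton) updates rather than a generic vector.

```python
import numpy as np

def gaussian_perturb(update, clip_norm, sigma, rng):
    """Clip the update to L2 norm clip_norm, then add Gaussian noise
    scaled by sigma * clip_norm (standard Gaussian-mechanism shape)."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, sigma * clip_norm, size=update.shape)

def top_k_sparsify(vec, k):
    """Keep only the k largest-magnitude entries; zero out the rest.
    Only the k surviving (index, value) pairs need to be uploaded."""
    out = np.zeros_like(vec)
    idx = np.argsort(np.abs(vec))[-k:]
    out[idx] = vec[idx]
    return out

# Illustrative client step: privatize a local update, then sparsify the upload.
rng = np.random.default_rng(0)
local_update = rng.normal(size=10)
private_update = gaussian_perturb(local_update, clip_norm=1.0, sigma=1.0, rng=rng)
sparse_upload = top_k_sparsify(private_update, k=3)
```

Because each coordinate survives sparsification only with some probability, the server observes less about any single coordinate, which is the intuition behind the "sparsification amplifies privacy and lowers the required noise intensity" claim.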