🤖 AI Summary
This paper addresses online federated learning under streaming, non-i.i.d. data with time-varying distributions. In this setting, conventional mechanisms that inject independent noise into local gradients suffer degraded utility, because they ignore the correlation among successive updates and pay a fresh privacy cost at every frequent model revision.
Method: Under (ε,δ)-differential privacy constraints, we propose the first time-correlated noise mechanism for local gradient perturbation. Leveraging perturbed iterate analysis and a quasi-strong convexity assumption, we derive a tight upper bound on dynamic regret that explicitly quantifies the trade-off among privacy, utility, and non-stationarity.
Contribution/Results: We establish the first theoretical framework for analyzing dynamic regret in differentially private online federated learning under time-varying environments. Experiments demonstrate that our approach significantly improves convergence speed and model accuracy over independent-noise baselines, empirically validating the efficacy of time-correlated noise for dynamic privacy-preserving learning.
📝 Abstract
We introduce a novel differentially private algorithm for online federated learning that employs temporally correlated noise to enhance utility while ensuring privacy of continuously released models. To address challenges posed by DP noise and local updates with streaming non-i.i.d. data, we develop a perturbed iterate analysis to control the impact of the DP noise on the utility. Moreover, we demonstrate how the drift errors from local updates can be effectively managed under a quasi-strong convexity condition. Subject to an $(\epsilon, \delta)$-DP budget, we establish a dynamic regret bound over the entire time horizon, quantifying the impact of key parameters and the intensity of changes in dynamic environments. Numerical experiments confirm the efficacy of the proposed algorithm.
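To make the idea of temporally correlated gradient perturbation concrete, here is a minimal illustrative sketch. It assumes an AR(1) correlation structure with coefficient `rho` and a clipped-gradient local step; the paper's actual correlation structure and its calibration to the $(\epsilon, \delta)$-DP budget are not reproduced here, so the function names, parameters, and the AR(1) choice are all illustrative assumptions, not the authors' mechanism.

```python
import numpy as np

def correlated_noise_stream(dim, num_steps, sigma, rho, seed=0):
    """Yield temporally correlated Gaussian noise via an AR(1) process:
        z_t = rho * z_{t-1} + sqrt(1 - rho^2) * w_t,   w_t ~ N(0, sigma^2 I),
    so each z_t is marginally N(0, sigma^2 I) while successive draws are
    correlated with coefficient rho. Illustrative only: calibrating sigma and
    rho to a given (epsilon, delta)-DP budget is omitted.
    """
    rng = np.random.default_rng(seed)
    z = rng.normal(0.0, sigma, size=dim)
    for _ in range(num_steps):
        yield z
        z = rho * z + np.sqrt(1.0 - rho**2) * rng.normal(0.0, sigma, size=dim)

def private_local_step(w, grad, lr, noise, clip=1.0):
    """One perturbed local SGD step: clip the gradient to norm <= clip,
    add the (possibly correlated) noise draw, and take a gradient step."""
    norm = np.linalg.norm(grad)
    if norm > clip:
        grad = grad * (clip / norm)
    return w - lr * (grad + noise)
```

A client would draw one `noise` vector from the stream per local update; because successive draws are correlated, the noise injected into consecutive releases partially cancels in aggregate, which is the intuition the abstract's utility claim rests on.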