🤖 AI Summary
This paper addresses stochastic optimization under online learning and local differential privacy (LDP), where individual data remain on-device and only privatized gradients—perturbed via an LDP mechanism—are released to estimate a static population parameter.
Method: We integrate LDP with clipped, noise-injected differentially private stochastic gradient descent (DP-SGD), and rigorously characterize the quantitative impact of step size, data dimensionality, and privacy budget ε on convergence rate.
Contribution/Results: We establish the first non-asymptotic convergence guarantee for DP-SGD in finite-sample LDP settings, proving an $O(1/\sqrt{T})$ rate. Our analysis uncovers fundamental trade-offs among privacy, statistical accuracy, and dimensionality. Numerical experiments corroborate the theoretical convergence bound and validate sensitivity to hyperparameters. This work fills a critical gap in non-asymptotic analysis for private stochastic optimization and provides an interpretable, tunable theoretical foundation for LDP-based online learning.
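The method described above can be sketched in code. The following is a minimal, illustrative implementation, not the paper's actual algorithm: it assumes a simple mean-estimation objective, a hypothetical Laplace noise calibration for the privacy budget ε, and the $O(1/\sqrt{t})$ step-size schedule suggested by the stated convergence rate.

```python
import numpy as np

def ldp_sgd(stream, dim, eps, clip=1.0, lr0=0.1, rng=None):
    """Online SGD where each per-sample gradient is clipped and privatized
    with Laplace noise before release (local-DP sketch, illustrative only)."""
    rng = rng or np.random.default_rng(0)
    theta = np.zeros(dim)
    for t, x in enumerate(stream, start=1):
        # Gradient of the toy loss 0.5 * ||theta - x||^2 (assumed objective).
        g = theta - x
        # Clip the per-sample gradient to L2 norm <= clip.
        g *= min(1.0, clip / max(np.linalg.norm(g), 1e-12))
        # Privatize before release; the scale 2*clip/eps is an assumed
        # calibration for illustration, not the paper's mechanism.
        noise = rng.laplace(scale=2.0 * clip / eps, size=dim)
        # Decaying step size lr0 / sqrt(t), matching the O(1/sqrt(T)) rate.
        theta -= (lr0 / np.sqrt(t)) * (g + noise)
    return theta

# Usage: estimate a static population mean from a privatized stream.
rng = np.random.default_rng(42)
mean = np.array([1.0, -1.0])
samples = mean + rng.normal(size=(2000, 2))
estimate = ldp_sgd(samples, dim=2, eps=100.0, clip=5.0, rng=rng)
```

With a generous budget (large ε) the estimate lands near the true mean; shrinking ε inflates the noise scale and slows convergence, which is exactly the privacy–accuracy trade-off the analysis quantifies.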
📝 Abstract
Differentially Private Stochastic Gradient Descent (DP-SGD) has been widely used for solving optimization problems with privacy guarantees in machine learning and statistics. Nevertheless, a systematic non-asymptotic convergence analysis for DP-SGD, particularly in the context of online problems and local differential privacy (LDP) models, remains largely elusive. Existing non-asymptotic analyses have focused on non-private optimization methods, and hence are not applicable to privacy-preserving optimization problems. This work bridges this gap and opens the door to non-asymptotic convergence analysis of private optimization problems. We investigate a general framework for the online LDP model in stochastic optimization problems. We assume that sensitive information from individuals is collected sequentially and aim to estimate, in real time, a static parameter that pertains to the population of interest. Most importantly, we conduct a comprehensive non-asymptotic convergence analysis of the proposed estimators in finite-sample settings, which gives their users practical guidelines on how hyperparameters such as step size, parameter dimension, and privacy budget affect convergence rates. Our proposed estimators are validated both theoretically and empirically through rigorous mathematical derivations and carefully constructed numerical experiments.