🤖 AI Summary
This work addresses the bias introduced by one-shot noise mechanisms under noninteractive local differential privacy (LDP), which degrades the accuracy of downstream models. It is the first to apply the Weierstrass transform and its inverse to model and correct LDP-induced bias, proposing an inverse Weierstrass transform method to enable unbiased estimation of nonlinear functions. Building upon this, the authors develop a novel stochastic gradient descent algorithm, IWP-SGD, which provably converges to the true population risk minimizer at a rate of $\mathcal{O}(1/n)$ while maintaining both unbiasedness and computational efficiency. Extensive experiments on synthetic and real-world binary classification datasets demonstrate that IWP-SGD significantly outperforms existing LDP learning methods in terms of accuracy and utility.
📝 Abstract
Releasing data once and for all under noninteractive Local Differential Privacy (LDP) enables complete data reusability, but the resulting noise may create bias in subsequent analyses. In this work, we leverage the Weierstrass transform to characterize this bias in binary classification. We prove that inverting this transform leads to a bias-correction method to compute unbiased estimates of nonlinear functions on examples released under LDP. We then build a novel stochastic gradient descent algorithm called Inverse Weierstrass Private SGD (IWP-SGD). It converges to the true population risk minimizer at a rate of $\mathcal{O}(1/n)$, with $n$ the number of examples. We empirically validate IWP-SGD on binary classification tasks using synthetic and real-world datasets.
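To build intuition for the bias-correction idea, here is a minimal numerical sketch (not the paper's IWP-SGD algorithm). Under additive Gaussian noise $Z \sim \mathcal{N}(0, \sigma^2)$, the expectation $\mathbb{E}[f(x+Z)]$ equals the Weierstrass (Gaussian-smoothing) transform of $f$ at $x$, so evaluating a nonlinear $f$ on noisy releases is biased. Inverting the transform yields a corrected function $g$ with $\mathbb{E}[g(x+Z)] = f(x)$; for the toy choice $f(x)=x^2$, the inverse is $g(y)=y^2-\sigma^2$. All variable names and the Gaussian-mechanism assumption below are illustrative, not taken from the paper.

```python
import numpy as np

# Toy demonstration: bias of a nonlinear function under one-shot
# Gaussian noise, and its Weierstrass-inverse correction.
rng = np.random.default_rng(0)
sigma = 1.0                  # noise scale (assumed Gaussian mechanism)
x = 0.7                      # true private value
n = 1_000_000                # number of noisy releases

y = x + rng.normal(0.0, sigma, size=n)   # one-shot noisy releases

# Naive estimate of f(x) = x^2 is biased: E[(x+Z)^2] = x^2 + sigma^2.
naive = np.mean(y**2)

# Inverse-Weierstrass-corrected function g(y) = y^2 - sigma^2 is
# unbiased: E[g(x+Z)] = x^2.
corrected = np.mean(y**2 - sigma**2)

print(f"target f(x)  = {x**2:.4f}")
print(f"naive mean   = {naive:.4f}")      # ~ x^2 + sigma^2
print(f"corrected    = {corrected:.4f}")  # ~ x^2
```

The same principle (replace the loss or its gradient by its inverse transform before averaging over noisy examples) underlies the unbiased gradient estimates used in the SGD setting described above.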