Learning with Locally Private Examples by Inverse Weierstrass Private Stochastic Gradient Descent

📅 2026-02-18
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the bias introduced by one-shot noise mechanisms under non-interactive local differential privacy (LDP), which degrades the accuracy of downstream models. It is the first to apply the Weierstrass transform and its inverse to model and correct LDP-induced bias, proposing an inverse Weierstrass transform method to enable unbiased estimation of nonlinear functions. Building upon this, the authors develop a novel stochastic gradient descent algorithm, IWP-SGD, which theoretically converges to the true population risk minimizer at a rate of $\mathcal{O}(1/n)$ while maintaining both unbiasedness and computational efficiency. Extensive experiments on synthetic and real-world binary classification datasets demonstrate that IWP-SGD significantly outperforms existing LDP learning methods in terms of accuracy and utility.

๐Ÿ“ Abstract
Releasing data once and for all under noninteractive Local Differential Privacy (LDP) enables complete data reusability, but the resulting noise may create bias in subsequent analyses. In this work, we leverage the Weierstrass transform to characterize this bias in binary classification. We prove that inverting this transform leads to a bias-correction method to compute unbiased estimates of nonlinear functions on examples released under LDP. We then build a novel stochastic gradient descent algorithm called Inverse Weierstrass Private SGD (IWP-SGD). It converges to the true population risk minimizer at a rate of $\mathcal{O}(1/n)$, with $n$ the number of examples. We empirically validate IWP-SGD on binary classification tasks using synthetic and real-world datasets.
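The bias described in the abstract can be seen on a minimal example. Adding Gaussian noise to examples leaves linear statistics unbiased, but any nonlinear function of the noisy release is biased: the expectation of $f$ over the noisy data is the Weierstrass transform of $f$ (a Gaussian convolution), not $f$ itself. Inverting the transform recovers an unbiased estimate. The sketch below (not the paper's actual algorithm, just an illustration of the principle it builds on) uses $f(t) = t^2$, for which the inverse Weierstrass correction is simply subtracting the noise variance, since $\mathbb{E}[(x+Z)^2] = x^2 + \sigma^2$ for $Z \sim \mathcal{N}(0, \sigma^2)$:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0  # noise scale of an assumed Gaussian LDP release mechanism
n = 100_000

x = rng.uniform(-1.0, 1.0, n)            # private examples (never released)
y = x + rng.normal(0.0, sigma, n)        # examples released once under LDP

# Naive plug-in estimate of E[x^2] from the noisy release:
# biased upward by sigma^2, since E[y^2] = E[x^2] + sigma^2.
naive = np.mean(y ** 2)

# Inverse-Weierstrass-corrected estimate: for f(t) = t^2, applying the
# inverse transform yields f_corr(t) = t^2 - sigma^2, which is unbiased
# for E[x^2] when averaged over the noisy release.
corrected = np.mean(y ** 2) - sigma ** 2

true_val = np.mean(x ** 2)               # ground truth (for comparison only)
```

For general smooth losses the correction is no longer a constant shift, which is where the paper's inverse Weierstrass machinery and the IWP-SGD algorithm come in; this example only shows the bias mechanism that motivates them.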
Problem

Research questions and friction points this paper is trying to address.

Local Differential Privacy
bias correction
Weierstrass transform
private data release
stochastic gradient descent
Innovation

Methods, ideas, or system contributions that make the work stand out.

Local Differential Privacy
Weierstrass Transform
Bias Correction
Private SGD
Nonlinear Estimation
Jean Dufraiche
Univ. Lille, Inria, CNRS, Centrale Lille, UMR 9189 - CRIStAL, F-59000 Lille, France
Paul Mangold
CMAP, CNRS, École polytechnique, Institut Polytechnique de Paris, 91120 Palaiseau, France
Michaël Perrot
Researcher, Inria
Machine Learning
Fair Machine Learning
Comparison-based Learning
Metric Learning
Learning Theory
Marc Tommasi
Professor of Computer Science, Lille University
Machine Learning
Formal Tree Languages