Correlated Noise Mechanisms for Differentially Private Learning

📅 2025-06-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the suboptimal privacy–utility trade-off in differential privacy (DP) machine learning, this paper introduces a *correlated noise mechanism* tailored for private training, overcoming the limitations of independent noise injection in standard SGD. Methodologically, it proposes the first systematic theoretical framework for correlated noise, rigorously characterizing the noise-cancellation effect of (anti-)correlated noise in weighted prefix-sum estimation—and unifying matrix mechanisms, factorization mechanisms, and the DP-FTRL paradigm under a single principle. By structuring noise to exploit temporal and structural dependencies in gradient aggregation, the mechanism achieves strict DP guarantees while substantially improving utility. Experiments demonstrate that, in industrial-scale deployments, it reduces privacy budget consumption by 30%–50% and attains model accuracy nearly matching non-private baselines. This work establishes a scalable, high-fidelity paradigm for large-scale private AI training.

Technology Category

Application Category

📝 Abstract
This monograph explores the design and analysis of correlated noise mechanisms for differential privacy (DP), focusing on their application to private training of AI and machine learning models via the core primitive of estimation of weighted prefix sums. While typical DP mechanisms inject independent noise into each step of a stochastic gradient (SGD) learning algorithm in order to protect the privacy of the training data, a growing body of recent research demonstrates that introducing (anti-)correlations in the noise can significantly improve privacy-utility trade-offs by carefully canceling out some of the noise added on earlier steps in subsequent steps. Such correlated noise mechanisms, known variously as matrix mechanisms, factorization mechanisms, and DP-Follow-the-Regularized-Leader (DP-FTRL) when applied to learning algorithms, have also been influential in practice, with industrial deployment at a global scale.
Problem

Research questions and friction points this paper is trying to address.

Design correlated noise for differential privacy in AI training
Improve privacy-utility trade-offs via noise cancellation in SGD
Apply matrix mechanisms to large-scale industrial deployments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Correlated noise mechanisms for differential privacy
Matrix and factorization mechanisms in DP
DP-FTRL for improved privacy-utility trade-offs
🔎 Similar Papers
No similar papers found.