Balancing Utility and Privacy: Dynamically Private SGD with Random Projection

📅 2025-09-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the utility degradation caused by static noise and the rapidly growing privacy cost of high-dimensional parameter spaces in private stochastic gradient descent (SGD), this paper proposes D2P2-SGD, an optimizer that integrates dynamic differential privacy (adapting the noise scale throughout training), random projection for dimensionality reduction, and automatic gradient clipping. Under an $(\varepsilon,\delta)$-differential privacy guarantee, D2P2-SGD enables efficient private learning, and the paper establishes a sublinear convergence rate for smooth non-convex objectives. Empirically, across multiple benchmark datasets, D2P2-SGD improves model accuracy by 2.3–5.1% on average while reducing both communication and computational overhead, facilitating practical private training of large-scale models.
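To make the combined mechanism concrete, here is a minimal sketch of one D2P2-SGD-style update in NumPy. It is not the authors' implementation: the decay schedule, the projection matrix construction, and the decision to add noise in the projected space are all illustrative assumptions.

```python
import numpy as np

def d2p2_sgd_step(params, per_sample_grads, step, lr=0.1,
                  clip=1.0, sigma0=1.0, decay=0.01,
                  proj_dim=16, rng=None):
    """One illustrative step: per-sample clipping, dynamically scaled
    Gaussian noise, and random projection. Schedules and projection
    details are assumptions, not the paper's exact algorithm."""
    rng = np.random.default_rng(0) if rng is None else rng
    d = params.size
    # Clip each per-sample gradient so its L2 norm is at most `clip`.
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    clipped = per_sample_grads * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    g = clipped.mean(axis=0)
    # Random projection to a lower-dimensional space (Johnson-Lindenstrauss style).
    A = rng.standard_normal((proj_dim, d)) / np.sqrt(proj_dim)
    g_low = A @ g
    # Dynamic noise: the scale decays as training progresses (assumed schedule).
    sigma_t = sigma0 / (1.0 + decay * step)
    g_low += rng.standard_normal(proj_dim) * sigma_t * clip / len(per_sample_grads)
    # Map back to parameter space and apply the update.
    return params - lr * (A.T @ g_low)
```

Working in the `proj_dim`-dimensional space is what keeps the per-step noise and computation from scaling with the full parameter count, which is the efficiency argument the summary refers to.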

📝 Abstract
Stochastic optimization is a pivotal enabler in modern machine learning, producing effective models for various tasks. However, several existing works have shown that model parameters and gradient information are susceptible to privacy leakage. Although Differentially Private SGD (DPSGD) addresses privacy concerns, its static noise mechanism impacts the error bounds for model performance. Additionally, with the exponential increase in model parameters, efficient learning of these models using stochastic optimizers has become more challenging. To address these concerns, we introduce the Dynamically Differentially Private Projected SGD (D2P2-SGD) optimizer. In D2P2-SGD, we combine two important ideas: (i) dynamic differential privacy (DDP) with automatic gradient clipping and (ii) random projection with SGD, allowing dynamic adjustment of the tradeoff between utility and privacy of the model. It exhibits provably sub-linear convergence rates across different objective functions, matching the best available rate. The theoretical analysis further suggests that DDP leads to better utility at the cost of privacy, while random projection enables more efficient model learning. Extensive experiments across diverse datasets show that D2P2-SGD remarkably enhances accuracy while maintaining privacy. Our code is available here.
Problem

Research questions and friction points this paper is trying to address.

Addressing privacy leakage in stochastic optimization models
Improving utility-privacy tradeoff with dynamic noise mechanisms
Enhancing efficiency in high-dimensional model parameter learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic differential privacy with automatic clipping
Random projection for efficient model learning
Dynamic adjustment of utility-privacy tradeoff
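The "automatic clipping" bullet above can be sketched as follows. A common form of automatic clipping rescales each per-sample gradient by $1/(\lVert g \rVert + \gamma)$, which removes the need to tune a clipping threshold; the paper's exact rule may differ, so treat this as an assumed illustration.

```python
import numpy as np

def auto_clip(per_sample_grads, gamma=0.01):
    """Automatic clipping: rescale each per-sample gradient by
    1 / (||g|| + gamma), so every rescaled gradient has norm < 1
    without a hand-tuned threshold (one common formulation; the
    paper's variant may differ)."""
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    return per_sample_grads / (norms + gamma)
```

Because every rescaled gradient has norm strictly below 1, the noise scale needed for a given privacy level no longer depends on a tuned clipping constant.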