An Improved Privacy and Utility Analysis of Differentially Private SGD with Bounded Domain and Smooth Losses

📅 2025-02-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the privacy-utility trade-off of Differentially Private Stochastic Gradient Descent (DPSGD) for smooth loss functions, relaxing conventional strong assumptions such as convexity and bounded gradients, and unifying the analysis across bounded and unbounded parameter domains. Method: The authors first prove that the privacy loss of DPSGD on a bounded domain converges without convexity. They then introduce a multi-step privacy accounting technique that exploits the noisy smooth-reduction property, and conduct a tight utility analysis by combining the non-expansiveness of projection with gradient clipping. Contributions/Results: Theoretically, they establish that a smaller parameter-domain diameter can simultaneously improve privacy (reducing ε) and utility (decreasing estimation error), demonstrating the feasibility of privacy-utility co-optimization. Empirically, the approach achieves significantly higher test accuracy under the same privacy budget, or reduces the required privacy budget by 20%–40% to attain comparable accuracy.

📝 Abstract
Differentially Private Stochastic Gradient Descent (DPSGD) is widely used to protect sensitive data during the training of machine learning models, but its privacy guarantees often come at the cost of model performance, largely due to the inherent challenge of accurately quantifying privacy loss. While recent efforts have strengthened privacy guarantees by focusing solely on the final output and bounded domain cases, they still impose restrictive assumptions, such as convexity and other parameter limitations, and often lack a thorough analysis of utility. In this paper, we provide rigorous privacy and utility characterization for DPSGD for smooth loss functions in both bounded and unbounded domains. We track the privacy loss over multiple iterations by exploiting the noisy smooth-reduction property and establish the utility analysis by leveraging the projection's non-expansiveness and clipped SGD properties. In particular, we show that for DPSGD with a bounded domain, (i) the privacy loss can still converge without the convexity assumption, and (ii) a smaller bounded diameter can improve both privacy and utility simultaneously under certain conditions. Numerical results validate our results.
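The abstract describes the projected variant of DPSGD: per-example gradients are clipped, averaged, perturbed with Gaussian noise, and the update is projected back onto a bounded parameter domain, whose non-expansiveness drives both the privacy and utility results. A minimal sketch of one such step is below; the function and parameter names (`dpsgd_step`, `clip_norm`, `noise_mult`, `diameter`) are illustrative, not the paper's notation, and the projection shown is onto an origin-centered Euclidean ball of the given diameter.

```python
import numpy as np

def dpsgd_step(w, per_example_grads, clip_norm, noise_mult, lr,
               diameter=None, rng=None):
    """One projected DPSGD step (illustrative sketch, not the paper's exact algorithm).

    Per-example gradients are clipped to norm <= clip_norm, averaged, and
    perturbed with Gaussian noise of scale noise_mult * clip_norm / batch_size.
    If diameter is given, the iterate is projected onto the Euclidean ball of
    radius diameter / 2 centered at the origin; this projection is
    non-expansive, which keeps iterates inside the bounded domain.
    """
    rng = np.random.default_rng() if rng is None else rng
    batch_size = len(per_example_grads)

    # Clip each per-example gradient to at most clip_norm in L2 norm.
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    avg_grad = np.mean(clipped, axis=0)

    # Add calibrated Gaussian noise to the averaged gradient.
    noise = rng.normal(0.0, noise_mult * clip_norm / batch_size, size=w.shape)
    w_new = w - lr * (avg_grad + noise)

    # Optional projection onto the bounded domain (a ball of the given diameter).
    if diameter is not None:
        radius = diameter / 2.0
        norm = np.linalg.norm(w_new)
        if norm > radius:
            w_new = w_new * (radius / norm)
    return w_new
```

The paper's bounded-domain results suggest that shrinking `diameter` can tighten both the privacy loss and the estimation error at once, at the cost of a smaller feasible set.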
Problem

Research questions and friction points this paper is trying to address.

Improve the privacy-utility trade-off of DPSGD
Analyze smooth loss functions over bounded and unbounded domains
Establish privacy loss convergence without the convexity assumption
Innovation

Methods, ideas, or system contributions that make the work stand out.

Privacy analysis of DPSGD over a bounded domain
Multi-step accounting via the noisy smooth-reduction property
Utility analysis leveraging the non-expansiveness of projection