Convergent Privacy Loss of Noisy-SGD without Convexity and Smoothness

📅 2024-10-01
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
This work establishes differential privacy (DP) guarantees for noisy stochastic gradient descent (Noisy-SGD) with *hidden* internal states under non-convex, non-smooth loss functions, bypassing the conventional requirement of exposing all intermediate states and relaxing the standard convexity and smoothness assumptions. We propose a novel privacy analysis framework grounded in *shifted divergence*, integrating forward Wasserstein distance tracking, optimal shift allocation, and a Hölder reduction lemma. Our analysis yields the first convergent Rényi DP bound for Noisy-SGD assuming only Hölder-continuous gradients. Moreover, in the smooth strongly convex setting, our privacy bound strictly improves upon the state of the art. Collectively, this work significantly extends the theoretical applicability of Noisy-SGD to weaker regularity conditions and advances the foundational understanding of privacy analysis for hidden-state mechanisms.
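To make the setting concrete, here is a minimal Python sketch (assumed for illustration, not the paper's implementation) of the hidden-state mechanism under study: projected Noisy-SGD over a bounded domain that releases only its final iterate. The function names, ball-shaped domain, and hyperparameter defaults are all assumptions of this sketch.

```python
# Minimal sketch of hidden-state projected Noisy-SGD over a bounded domain.
import numpy as np

def project(w, radius):
    """Project w onto the Euclidean ball of radius `radius` (the bounded domain)."""
    norm = np.linalg.norm(w)
    return w if norm <= radius else w * (radius / norm)

def hidden_state_noisy_sgd(data, grad_fn, dim, radius=1.0, lr=0.05,
                           sigma=1.0, batch_size=32, iters=1000, seed=0):
    """Run projected Noisy-SGD and return only the last iterate.

    grad_fn(w, x) should return the per-example gradient; all
    hyperparameter defaults here are illustrative.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(dim)
    for _ in range(iters):
        idx = rng.choice(len(data), size=batch_size, replace=False)
        g = np.mean([grad_fn(w, x) for x in data[idx]], axis=0)
        # Per-step Gaussian noise; sigma drives the privacy/utility trade-off.
        w = project(w - lr * (g + sigma * rng.standard_normal(dim)), radius)
    # Intermediate iterates are never revealed; releasing only the final
    # iterate is what the hidden-state analysis exploits.
    return w
```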

📝 Abstract
We study the Differential Privacy (DP) guarantee of hidden-state Noisy-SGD algorithms over a bounded domain. Standard privacy analysis for Noisy-SGD assumes all internal states are revealed, which leads to a divergent Rényi DP bound with respect to the number of iterations. Ye & Shokri (2022) and Altschuler & Talwar (2022) proved convergent bounds for smooth (strongly) convex losses, and raised open questions about whether these assumptions can be relaxed. We provide positive answers by proving a convergent Rényi DP bound for non-convex non-smooth losses, where we show that requiring losses to have a Hölder continuous gradient is sufficient. We also provide a strictly better privacy bound compared to state-of-the-art results for smooth strongly convex losses. Our analysis relies on improving the shifted divergence analysis in multiple aspects, including forward Wasserstein distance tracking, identifying the optimal shift allocation, and the Hölder reduction lemma. Our results further elucidate the benefit of hidden-state analysis for DP and its applicability.
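For reference, a brief notation sketch of the key quantities behind these assumptions and the shifted divergence technique; the symbols below are chosen here for illustration and may not match the paper's exact notation.

```latex
% alpha-Hölder continuous gradient (alpha = 1 recovers standard smoothness,
% i.e. a Lipschitz gradient, so this is a strictly weaker assumption):
\[
  \|\nabla \ell(w) - \nabla \ell(v)\| \;\le\; L\,\|w - v\|^{\alpha},
  \qquad \alpha \in (0,1],\ \text{for all } w, v.
\]
% Rényi divergence of order q > 1, the quantity bounded by Rényi DP:
\[
  D_q(P \,\|\, Q) \;=\; \frac{1}{q-1}\,
  \log \mathbb{E}_{x \sim Q}\!\left[\Bigl(\tfrac{dP}{dQ}(x)\Bigr)^{\!q}\right].
\]
% Shifted Rényi divergence, as used in the shifted-divergence line of work:
% an infimum over distributions within infinity-Wasserstein distance z.
\[
  D_q^{(z)}(P \,\|\, Q) \;=\; \inf_{P'\,:\;W_\infty(P, P') \le z} D_q(P' \,\|\, Q).
\]
```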
Problem

Research questions and friction points this paper is trying to address.

Convergent Rényi DP bound
Non-convex non-smooth losses
Hidden-state Noisy-SGD analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Convergent Rényi DP bound
Non-convex non-smooth losses
Hölder continuous gradient analysis
Eli Chien
Visiting Researcher, Google
Regulatable AI, Machine Unlearning, Differential Privacy, Graph machine learning
Pan Li
Department of Electrical and Computer Engineering, Georgia Institute of Technology, Georgia, U.S.A.