🤖 AI Summary
This work analyzes the mixing time of the Projected Langevin Algorithm (PLA) and the differential privacy guarantees of Differentially Private Stochastic Gradient Descent (DP-SGD), focusing on three key settings: nonsmooth convex, weakly smooth, and dissipative objective functions. Departing from conventional nonexpansiveness assumptions, the authors extend the Privacy Amplification by Iteration (PABI) framework to non-nonexpansive noisy iterations for the first time, and develop a unified analytical paradigm grounded in the modulus of continuity of the gradient mapping. Leveraging this modulus-based characterization, Rényi divergence optimization, and projection-based stochastic differential equation discretization, they derive dimension-free mixing time upper bounds with poly-logarithmic dependence on the accuracy, as well as tight privacy bounds that explicitly depend on gradient regularity. All results are in closed form, either establishing new guarantees or substantially improving upon prior work.
📝 Abstract
We study the mixing time of the projected Langevin algorithm (LA) and the privacy curve of noisy Stochastic Gradient Descent (SGD), beyond nonexpansive iterations. Specifically, we derive new mixing time bounds for the projected LA which are, in some important cases, dimension-free and poly-logarithmic in the accuracy, closely matching the existing results in the smooth convex case. Additionally, we establish new upper bounds for the privacy curve of the subsampled noisy SGD algorithm. These bounds show a crucial dependence on the regularity of gradients, and are useful for a wide range of convex losses beyond the smooth case. Our analysis relies on a suitable extension of the Privacy Amplification by Iteration (PABI) framework (Feldman et al., 2018; Altschuler and Talwar, 2022, 2023) to noisy iterations whose gradient map is not necessarily nonexpansive. This extension is achieved by designing an optimization problem which accounts for the best possible Rényi divergence bound obtained by an application of PABI, where the tractability of the problem is crucially related to the modulus of continuity of the associated gradient mapping. We show that, in several interesting cases -- including the nonsmooth convex, weakly smooth, and (strongly) dissipative settings -- such an optimization problem can be solved exactly and explicitly. This yields the tightest possible PABI-based bounds, where our results are either new or substantially sharper than those in previous works.
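For readers unfamiliar with the iteration being analyzed, the following is a minimal, generic sketch of a projected noisy (sub)gradient step of the kind studied here: move along a (sub)gradient, inject Gaussian noise, then project back onto the constraint set. The step size, noise level, ball constraint, and the ℓ1 example loss are illustrative choices, not the paper's specific instantiation.

```python
import numpy as np

def projected_noisy_step(x, subgrad, eta, sigma, radius, rng):
    """One projected noisy (sub)gradient / Langevin-type update:
    x <- Pi_K(x - eta * subgrad(x) + sigma * xi), xi ~ N(0, I),
    where K is taken (for illustration) to be the Euclidean ball
    of the given radius."""
    y = x - eta * subgrad(x) + sigma * rng.standard_normal(x.shape)
    norm = np.linalg.norm(y)
    if norm > radius:
        # Euclidean projection onto the ball: rescale to the boundary.
        y = y * (radius / norm)
    return y

# Example: nonsmooth convex loss f(x) = ||x||_1, whose subgradient
# (away from zero) is sign(x) -- a case outside the smooth setting.
subgrad = lambda x: np.sign(x)

rng = np.random.default_rng(0)
x = np.full(5, 10.0)
for _ in range(200):
    x = projected_noisy_step(x, subgrad, eta=0.1, sigma=0.05,
                             radius=1.0, rng=rng)
```

Note that the gradient map of such a nonsmooth loss need not be nonexpansive, which is precisely the regime the extended PABI analysis addresses.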