🤖 AI Summary
This work addresses the challenge of differential privacy (DP) analysis for stochastic gradient descent (SGD) under heavy-tailed noise—a setting where existing DP analyses either rely on light-tailed assumptions or suffer from strong dimension dependence and do not extend to Rényi differential privacy (RDP). We establish the first RDP guarantees for heavy-tailed stochastic differential equations (SDEs) and their discrete SGD approximations. Our method leverages fractional Poincaré inequalities, integrates Rényi divergence flow analysis with joint continuous–discrete modeling, and yields a unified privacy tracking framework. The resulting RDP bounds exhibit significantly weaker dependence on the parameter dimension and eliminate the need for gradient clipping—enabling rigorous privacy control even under heavy-tailed noise. This advances DP theory by simultaneously relaxing tail-behavior assumptions and broadening the scope of supported privacy definitions, thereby extending the applicability of differential privacy to modern deep learning settings.
📝 Abstract
Characterizing the differential privacy (DP) of learning algorithms has become a major challenge in recent years. In parallel, many studies have suggested investigating the behavior of stochastic gradient descent (SGD) with heavy-tailed noise, both as a model of modern deep learning dynamics and as a means to improve performance. However, most DP bounds focus on light-tailed noise, where satisfactory guarantees have been obtained but the proposed techniques do not directly extend to the heavy-tailed setting. Recently, the first DP guarantees for heavy-tailed SGD were obtained. These results provide $(0,\delta)$-DP guarantees without requiring gradient clipping. Despite casting new light on the link between DP and heavy-tailed algorithms, these results have a strong dependence on the number of parameters and cannot be extended to other DP notions like the well-established Rényi differential privacy (RDP). In this work, we propose to address these limitations by deriving the first RDP guarantees for heavy-tailed SDEs, as well as their discretized counterparts. Our framework is based on new Rényi flow computations and the use of well-established fractional Poincaré inequalities. Under the assumption that such inequalities are satisfied, we obtain DP guarantees that have a much weaker dependence on the dimension compared to prior art.
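For readers unfamiliar with the target privacy notion, the standard definition of Rényi differential privacy (not stated in the abstract, but standard background) is built on the Rényi divergence of order $\alpha > 1$:

```latex
% Rényi divergence of order \alpha between distributions P and Q:
\[
  D_{\alpha}(P \,\|\, Q)
  = \frac{1}{\alpha - 1}
    \log \mathbb{E}_{x \sim Q}\!\left[
      \left( \frac{P(x)}{Q(x)} \right)^{\alpha}
    \right].
\]
% A randomized mechanism M satisfies (\alpha, \varepsilon)-RDP if,
% for every pair of adjacent datasets D and D',
\[
  D_{\alpha}\bigl( M(D) \,\|\, M(D') \bigr) \le \varepsilon .
\]
```

An $(\alpha,\varepsilon)$-RDP guarantee can be converted into an $(\epsilon,\delta)$-DP guarantee, which is why RDP bounds for heavy-tailed SDEs are a natural strengthening of the earlier $(0,\delta)$-DP results mentioned above.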