AI Summary
This work addresses a limitation of existing information contraction bounds, which are largely confined to $(\varepsilon,0)$-local differential privacy (LDP) and thus fail to characterize the information loss of $(\varepsilon,\delta)$-LDP mechanisms with $\delta > 0$. By combining the mathematical structure of differential privacy with strong data-processing inequalities from information theory, the paper establishes the first linear and nonlinear information contraction bounds for $(\varepsilon,\delta)$-LDP that hold for any $\delta \geq 0$. The resulting general strong data-processing inequality applies to both the hockey-stick divergence and general $f$-divergences, removing the prior restriction to $\delta = 0$. Moreover, it yields tighter guarantees under various information measures, including total variation distance, thereby extending and improving upon existing results.
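As background for the hockey-stick divergence mentioned above, the sketch below (an illustration, not code from the paper) computes $E_\gamma(P\|Q) = \sum_x \max(P(x) - \gamma Q(x), 0)$ and uses the standard characterization that a mechanism $K$ is $(\varepsilon,\delta)$-LDP iff $E_{e^\varepsilon}(K(\cdot|x)\|K(\cdot|x')) \leq \delta$ for all input pairs; binary randomized response is a hypothetical example mechanism, not one analyzed in the text.

```python
import math

def hockey_stick(p, q, gamma):
    """E_gamma(P || Q) = sum_x max(P(x) - gamma * Q(x), 0)."""
    return sum(max(pi - gamma * qi, 0.0) for pi, qi in zip(p, q))

def randomized_response(eps):
    """Binary randomized response: report the true bit w.p. e^eps / (1 + e^eps)."""
    keep = math.exp(eps) / (1.0 + math.exp(eps))
    # Rows are the conditional output distributions K(. | x) for x in {0, 1}.
    return [[keep, 1.0 - keep], [1.0 - keep, keep]]

eps = 1.0
K = randomized_response(eps)
# (eps, delta)-LDP holds iff E_{e^eps}(K(.|x) || K(.|x')) <= delta for all x, x'.
delta = max(hockey_stick(K[0], K[1], math.exp(eps)),
            hockey_stick(K[1], K[0], math.exp(eps)))
print(delta)  # randomized response is pure (eps, 0)-LDP, so this is 0
```

Note that $E_1(P\|Q)$ recovers the total variation distance, which is why the hockey-stick divergence interpolates between the measures discussed in the abstract.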
Abstract
The distinguishability, as quantified by information measures, that remains after data is processed by a private mechanism has been a useful tool for studying various statistical and operational tasks under privacy constraints. To this end, standard data-processing inequalities and strong data-processing inequalities (SDPIs) are employed. Most previously known, and even tight, characterizations of the contraction of information measures, including total variation distance, hockey-stick divergences, and $f$-divergences, apply only to $(\varepsilon,0)$-locally differentially private (LDP) mechanisms. In this work, we derive both linear and nonlinear strong data-processing inequalities for the hockey-stick divergence and $f$-divergences that are valid for all $(\varepsilon,\delta)$-LDP mechanisms, even when $\delta \neq 0$. Our results either generalize or improve the previously known bounds on the contraction of these distinguishability measures.
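To make the contraction statement concrete, the following sketch numerically checks the well-known bound $\mathrm{TV}(PK, QK) \leq \frac{e^\varepsilon - 1}{e^\varepsilon + 1}\,\mathrm{TV}(P, Q)$ for an $(\varepsilon,0)$-LDP mechanism. The binary randomized-response mechanism and the input distributions are illustrative assumptions, not taken from the paper; for this particular mechanism the bound is known to hold with equality.

```python
import math

def tv(p, q):
    """Total variation distance between two finite distributions."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

def push_forward(dist, K):
    """Output distribution of the mechanism: (dist K)(y) = sum_x dist(x) K(y|x)."""
    return [sum(dist[x] * K[x][y] for x in range(len(dist)))
            for y in range(len(K[0]))]

eps = 1.0
keep = math.exp(eps) / (1.0 + math.exp(eps))
K = [[keep, 1.0 - keep], [1.0 - keep, keep]]  # binary randomized response

P, Q = [0.9, 0.1], [0.3, 0.7]                 # arbitrary input distributions
eta = (math.exp(eps) - 1.0) / (math.exp(eps) + 1.0)  # contraction coefficient

tv_out = tv(push_forward(P, K), push_forward(Q, K))
print(tv_out, eta * tv(P, Q))  # for binary randomized response these coincide
```

The paper's contribution, in this picture, is replacing the $\delta = 0$ coefficient $\frac{e^\varepsilon - 1}{e^\varepsilon + 1}$ with bounds that remain valid for general $(\varepsilon,\delta)$-LDP mechanisms.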