🤖 AI Summary
This work investigates the information contraction properties of discrete privacy mechanisms under bounded pointwise maximal leakage (PML), moving beyond the constraints of local differential privacy (LDP). By introducing a minimal probability mass constraint, the authors generalize the LDP framework to the broader PML setting. They characterize the contraction behavior of discrete Markov kernels via the Dobrushin coefficient and extend this analysis to arbitrary f-divergences using Binette's inequality. The main contributions are a theoretical link between PML and the Dobrushin coefficient, tight information contraction bounds applicable to any discrete privacy mechanism, and constructions of mechanisms that achieve these bounds. These results improve upon existing guarantees both for LDP and for more general privacy mechanisms.
📝 Abstract
We investigate Dobrushin coefficients of discrete Markov kernels that have bounded pointwise maximal leakage (PML) with respect to all distributions whose minimum probability mass is bounded away from zero by a constant $c>0$. This definition recovers local differential privacy (LDP) as $c\to 0$. We derive achievable bounds on contraction in terms of a kernel's PML guarantees, and provide mechanism constructions that achieve the presented bounds. Further, we extend the results to general $f$-divergences via an application of Binette's inequality. Our analysis yields tighter bounds for mechanisms satisfying LDP and extends beyond the LDP regime to any discrete kernel.
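For context, a brief sketch of the central quantity (standard definitions, not taken from this abstract): for a discrete Markov kernel $K$ from $\mathcal{X}$ to $\mathcal{Y}$, the Dobrushin coefficient is the worst-case total-variation distance between output distributions of two inputs,

$$
\eta_{\mathrm{TV}}(K) \;=\; \max_{x, x' \in \mathcal{X}} \; \mathrm{TV}\bigl(K(\cdot \mid x),\, K(\cdot \mid x')\bigr)
\;=\; \max_{x, x'} \frac{1}{2} \sum_{y \in \mathcal{Y}} \bigl| K(y \mid x) - K(y \mid x') \bigr|,
$$

and it governs contraction in the sense that $\mathrm{TV}(PK, QK) \le \eta_{\mathrm{TV}}(K)\, \mathrm{TV}(P, Q)$ for all input distributions $P, Q$. Bounding $\eta_{\mathrm{TV}}(K)$ in terms of a kernel's PML guarantee, and extending such bounds from total variation to general $f$-divergences, is the type of result the abstract describes.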