🤖 AI Summary
This work addresses the excessive noise introduced by existing Pufferfish privacy mechanisms, whose overly stringent constraints severely degrade data utility. The authors propose a relaxed noise-calibration approach that integrates the 1-Wasserstein (Kantorovich) mechanism, prior-belief modeling, and ℓ₁-sensitivity theory into a general, practical mechanism-generation algorithm. Theoretically, they prove that for every privacy budget and prior there exists a strictly superior noise-reduction scheme, yielding substantially improved utility, especially under low privacy budgets, and that the worst-case 1-Wasserstein mechanism is equivalent to the ℓ₁-sensitivity method. Empirical evaluations on three real-world datasets demonstrate utility gains of 47%–87%, confirming the method's broad applicability.
📝 Abstract
This paper introduces a relaxed noise calibration method that enhances data utility while attaining Pufferfish privacy. It builds on the existing $1$-Wasserstein (Kantorovich) mechanism by relaxing the overly strict condition that leads to excessive noise, and proposes a practical mechanism design algorithm as a general solution. We prove that our approach yields a strict noise reduction over the $1$-Wasserstein mechanism for all privacy budgets $\epsilon$ and prior beliefs, and that this reduction, which translates directly into improved data utility, grows significantly in low-privacy-budget regimes, which are common in real-world deployments. We also analyze how the noise reduction varies with the prior distribution and when it is optimal. Moreover, all of these properties carry over to the worst-case $1$-Wasserstein mechanism we introduce, in which the additive noise is largest; we further show that this worst-case mechanism is equivalent to the $\ell_1$-sensitivity method. Experimental results on three real-world datasets demonstrate a $47\%$ to $87\%$ improvement in data utility.
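To make the noise-calibration contrast concrete, below is a minimal Python sketch, not the authors' algorithm, comparing the two Laplace scales the abstract refers to: the $\ell_1$-sensitivity scale $\Delta_f/\epsilon$, which ignores the prior, and a scale proportional to the $1$-Wasserstein distance between the query-output distributions conditioned on a pair of secrets under a prior. The toy support, conditional probabilities, and secret pair are hypothetical, and a full mechanism would maximize over all secret pairs; the paper's relaxed calibration further reduces this scale.

```python
# Illustrative sketch only; the prior and distributions are hypothetical.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
eps = 0.5  # privacy budget epsilon

# Hypothetical conditional output distributions P(f(X) | s_i) and
# P(f(X) | s_j) for a counting query f over correlated records;
# correlation keeps the two distributions close to each other.
support = np.arange(5)                          # query outputs 0..4
p_i = np.array([0.70, 0.20, 0.05, 0.03, 0.02])  # P(f(X)=k | s_i)
p_j = np.array([0.55, 0.30, 0.08, 0.05, 0.02])  # P(f(X)=k | s_j)

# l1-sensitivity calibration: Delta_f = 1 for a counting query,
# so the Laplace scale is Delta_f / eps regardless of the prior.
delta_f = 1.0
scale_l1 = delta_f / eps

# Wasserstein calibration: scale by the 1-Wasserstein (Kantorovich)
# distance between the conditional output distributions (in general,
# the maximum over all secret pairs; one pair is shown here).
w1 = wasserstein_distance(support, support, p_i, p_j)
scale_w1 = w1 / eps

print(f"Laplace scale, l1-sensitivity : {scale_l1:.3f}")  # 2.000
print(f"Laplace scale, 1-Wasserstein  : {scale_w1:.3f}")  # 0.440

# Release: true query answer plus Laplace noise at the smaller scale.
true_answer = 1
noisy_answer = true_answer + rng.laplace(loc=0.0, scale=scale_w1)
print(f"Noisy answer: {noisy_answer:.3f}")
```

In this toy setting the Wasserstein-calibrated scale is far below the sensitivity-based one, which is the utility gap the paper's relaxed calibration exploits; its worst-case $1$-Wasserstein mechanism corresponds to the case where the two scales coincide.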