🤖 AI Summary
This work addresses privacy-preserving data release under a realistic threat model in which the adversary's prior knowledge is constrained only by an entropy lower bound, H(X) ≥ b, relaxing the independence assumption inherent in differential privacy. The authors develop a unified framework for privacy analysis and mechanism design against such bounded-knowledge adversaries. They study three core objectives: minimizing worst-case leakage under a given distortion budget, minimizing distortion under a leakage constraint, and bounding single-record maximal leakage. For these, they propose alternating optimization algorithms grounded in convex-concave duality that efficiently compute the leakage-distortion trade-off in high-dimensional settings. Experiments on binary symmetric channels and modular addition queries show that the proposed approach achieves better privacy-utility trade-offs than classical differential privacy mechanisms, and that it further enables rigorous privacy-risk auditing and mechanism certification.
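The bounded-knowledge threat model admits any adversary prior with Shannon entropy at least b. A minimal sketch of that admissibility check is below; the function name and the bound b = 0.5 bits are illustrative choices, not values from the paper.

```python
import math

def entropy_bits(p):
    """Shannon entropy H(X) in bits of a probability vector p."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

# Illustrative check: which candidate priors satisfy H(X) >= b?
b = 0.5                      # hypothetical entropy lower bound, in bits
uniform = [0.5, 0.5]         # H = 1.0 bit      -> admissible
skewed = [0.99, 0.01]        # H ~ 0.081 bits   -> excluded
print(entropy_bits(uniform) >= b, entropy_bits(skewed) >= b)  # → True False
```

The constraint carves out a convex set of priors (an entropy super-level set), which is what makes the worst-case leakage analysis over this adversary class tractable.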
📝 Abstract
The exponential growth of data collection necessitates privacy protections that remain robust while preserving data utility. We address information disclosure against adversaries with bounded prior knowledge, modeled by an entropy constraint $H(X) \geq b$. Within this information privacy framework -- which replaces differential privacy's independence assumption with a bounded-knowledge model -- we study three core problems: maximal per-record leakage, the primal leakage-distortion tradeoff (minimizing worst-case leakage under a distortion budget $D$), and its dual (minimizing distortion under a leakage budget $L$). These problems resemble classical information-theoretic ones (channel capacity, rate-distortion), but the high dimensionality and the entropy constraint make them harder to solve. We develop efficient alternating optimization algorithms that exploit convex-concave duality, with theoretical guarantees: local convergence for the primal problem and convergence to a stationary point for the dual. Experiments on binary symmetric channels and modular sum queries validate the algorithms and show improved privacy-utility tradeoffs over classical differential privacy mechanisms. This work provides a computational framework for auditing privacy risks and designing certified mechanisms under realistic adversary assumptions.
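The primal leakage-distortion problem generalizes the classical rate-distortion computation, for which alternating (Blahut-Arimoto-style) minimization is the textbook approach. The sketch below runs that classical alternation on a binary source with Hamming distortion; it omits the paper's entropy constraint $H(X) \geq b$ and worst-case leakage objective, and the function name, parameterization by a Lagrange multiplier beta, and iteration count are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rate_distortion_alternating(p_x, dist, beta, n_iter=100):
    """Blahut-Arimoto-style alternating minimization for R(D).

    p_x  : source distribution over X, shape (nx,)
    dist : distortion matrix d(x, y), shape (nx, ny)
    beta : Lagrange multiplier trading rate against distortion
    Returns (rate in bits, expected distortion).
    """
    q_y = np.full(dist.shape[1], 1.0 / dist.shape[1])  # output marginal
    for _ in range(n_iter):
        # Step 1: optimal test channel Q(y|x) for the current marginal q(y).
        Q = q_y[None, :] * np.exp(-beta * dist)
        Q /= Q.sum(axis=1, keepdims=True)
        # Step 2: optimal output marginal for the current channel.
        q_y = p_x @ Q
    joint = p_x[:, None] * Q
    rate = float(np.sum(joint * np.log2(np.where(joint > 0, Q / q_y, 1.0))))
    avg_dist = float(np.sum(joint * dist))
    return rate, avg_dist
```

On the uniform binary source with Hamming distortion, choosing beta = ln((1 - D)/D) traces the known curve R(D) = 1 - h2(D); e.g. beta = ln 9 yields expected distortion 0.1 and rate about 0.531 bits. The paper's algorithms face the added difficulty that the entropy-constrained, worst-case objective turns each alternation step into a constrained convex (or concave) subproblem rather than a closed-form update.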