Bayes-Nash Generative Privacy Against Membership Inference Attacks

📅 2024-10-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the coarse resolution of utility-privacy trade-offs, the complex sensitivity computations, and the loose privacy guarantees of differential privacy (DP) methods under membership inference attacks (MIAs), this paper proposes a novel paradigm: Bayes-Nash Generative Privacy (BNGP). BNGP formulates a Bayesian game between a defender (generator) and an attacker (discriminator), jointly training a generative adversarial network (GAN) architecture toward a Bayes-Nash equilibrium to adaptively optimize data release strategies. Unlike conventional DP approaches, BNGP obviates explicit sensitivity analysis, supports composition of privacy mechanisms, is robust to attacker priors, and, under idealized conditions, yields provable DP bounds. Experiments demonstrate that BNGP significantly enhances resilience against MIAs while preserving high data utility across diverse benchmarks.


📝 Abstract
Membership inference attacks (MIAs) expose significant privacy risks by determining whether an individual's data is in a dataset. While differential privacy (DP) mitigates such risks, it has several limitations in achieving an optimal balance between utility and privacy, including limited resolution in expressing this tradeoff through only a few privacy parameters, and intractable sensitivity calculations that may be necessary to provide tight privacy guarantees. We propose a game-theoretic framework that models privacy protection from MIAs as a Bayesian game between a defender and an attacker. In this game, a dataset is the defender's private information, with privacy loss to the defender (which is a gain to the attacker) captured in terms of the attacker's ability to infer membership of individuals in the dataset. To address the strategic complexity of this game, we represent the mixed strategy of the defender as a neural network generator which maps a private dataset to its public representation (for example, noisy summary statistics), while the mixed strategy of the attacker is captured by a discriminator which makes membership inference claims. We refer to the resulting computational approach as a general-sum Generative Adversarial Network, which is trained iteratively by alternating generator and discriminator updates akin to conventional GANs. We call the defender's data sharing policy thereby obtained Bayes-Nash Generative Privacy (BNGP). The BNGP strategy avoids sensitivity calculations, supports compositions of correlated mechanisms, is robust to the attacker's heterogeneous preferences over true and false positives, and yields provable differential privacy guarantees, albeit in an idealized setting.
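The abstract's core idea — a defender who releases a noisy summary and an attacker who best-responds with a membership inference claim — can be illustrated with a deliberately simplified sketch. This is not the paper's method (which trains neural generator and discriminator networks); here the defender's strategy is reduced to a single noise scale, the attacker to a score threshold, and the equilibrium to a grid-search best response. All names, the toy dataset, the quadratic utility loss, and the trade-off weight `LAM` are hypothetical choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

WORLD = rng.normal(0.0, 1.0, size=20)   # population of candidate records
N_MEMBERS = 10                          # size of the private dataset
TARGET = 0                              # record whose membership is attacked
LAM = 1.0                               # weight of privacy loss vs. utility loss

def release(members, sigma):
    """Defender's strategy: release a noisy mean of the private dataset."""
    return WORLD[members].mean() + rng.normal(0.0, sigma)

def attacker_advantage(sigma, trials=2000):
    """Attacker's payoff at its best response: the maximum of TPR - FPR
    over score thresholds, for inferring membership of the TARGET record."""
    s_in, s_out = [], []
    for _ in range(trials):
        members = rng.choice(len(WORLD), size=N_MEMBERS, replace=False)
        # Attacker scores how close the release is to the target's value.
        score = -abs(release(members, sigma) - WORLD[TARGET])
        (s_in if TARGET in members else s_out).append(score)
    s_in, s_out = np.array(s_in), np.array(s_out)
    thresholds = np.concatenate([s_in, s_out])
    return max((s_in >= t).mean() - (s_out >= t).mean() for t in thresholds)

# Defender best-responds over a grid of noise scales: utility loss grows
# with sigma (a noisier release), privacy loss is the attacker's advantage.
grid = [0.0, 0.1, 0.3, 1.0, 3.0]
losses = {s: s**2 + LAM * attacker_advantage(s) for s in grid}
best_sigma = min(losses, key=losses.get)
print("per-sigma defender losses:", losses)
print("chosen noise scale:", best_sigma)
```

Because the attacker's best response is recomputed inside the defender's loss, the chosen `best_sigma` is a (grid-restricted) best response to a best response — the same fixed-point logic that the paper's alternating generator/discriminator updates pursue in a far richer strategy space.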
Problem

Research questions and friction points this paper is trying to address.

Addresses privacy risks from membership inference attacks
Proposes a game-theoretic framework for privacy protection
Introduces Bayes-Nash Generative Privacy strategy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bayesian game framework
Generative Adversarial Network
Bayes-Nash Generative Privacy