Privacy Guarantee for Nash Equilibrium Computation of Aggregative Games Based on Pointwise Maximal Leakage

📅 2025-10-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing differential privacy mechanisms inadequately capture correlations among cost-function data in aggregative games, leading to overly loose privacy leakage assessments in Nash equilibrium computation. Method: This paper introduces pointwise maximal leakage (PML)—for the first time in game-theoretic equilibrium settings—proposing a prior-aware PML privacy framework: leveraging known prior distributions over players’ cost-function data, it constructs a pointwise maximal leakage model and derives a computable upper bound on privacy leakage. Contribution/Results: Theoretically, PML strictly refines differential privacy, yielding tighter and more precise privacy guarantees. Empirically, the framework significantly enhances resilience against inference attacks on private cost functions, achieving more accurate, quantifiable, and controllable privacy protection without compromising equilibrium utility.

📝 Abstract
Privacy preservation has served as a key metric in designing Nash equilibrium (NE) computation algorithms. Although differential privacy (DP) has been widely employed for privacy guarantees, it does not exploit prior distributional knowledge of datasets and is ineffective in assessing information leakage for correlated datasets. To address these concerns, we establish a pointwise maximal leakage (PML) framework for computing NE in aggregative games. By incorporating prior knowledge of players' cost-function datasets, we obtain a precise and computable upper bound on privacy leakage with PML guarantees. From the entire view, we show that PML refines DP by offering a tighter privacy guarantee, enabling flexibility in designing NE computation. From the individual view, we reveal that the lower bound of PML can exceed the upper bound of DP by constructing specific correlated datasets. These results emphasize that PML is a more appropriate privacy measure than DP, since the latter fails to adequately capture privacy leakage in correlated datasets. Moreover, we conduct experiments with adversaries who attempt to infer players' private information to illustrate the effectiveness of our framework.
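The abstract's claim that PML yields a tighter guarantee than DP can be illustrated on a toy discrete mechanism. The sketch below is not from the paper: it assumes a hypothetical three-value prior and a randomized-response-style channel, computes the PML of each outcome (the order-infinity Rényi divergence between posterior and prior), and compares the worst case against the channel's local-DP epsilon.

```python
import numpy as np

# Toy illustration (not the paper's setting): PML of a discrete mechanism
# versus the same channel's local-DP epsilon.
p_x = np.array([0.7, 0.2, 0.1])          # assumed prior over 3 secret values
P_y_given_x = np.array([                 # randomized-response-style channel P(y|x)
    [0.8, 0.1, 0.1],
    [0.1, 0.8, 0.1],
    [0.1, 0.1, 0.8],
])

p_y = p_x @ P_y_given_x                                   # output marginal
P_x_given_y = (P_y_given_x * p_x[:, None]) / p_y[None, :]  # posterior via Bayes

# PML of outcome y: log max_x P(x|y)/P(x)
pml = np.log(np.max(P_x_given_y / p_x[:, None], axis=0))

# Local-DP epsilon: log max over y of max_x P(y|x) / min_x P(y|x)
eps_dp = np.log(np.max(P_y_given_x.max(axis=0) / P_y_given_x.min(axis=0)))

print("PML per outcome:", pml)
print("max PML:", pml.max(), "<= DP epsilon:", eps_dp)
```

Because PML exploits the (here non-uniform) prior, its worst-case value stays below the prior-agnostic DP epsilon, matching the "PML refines DP" claim in the entire view.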
Problem

Research questions and friction points this paper is trying to address.

Differential privacy does not exploit prior distributional knowledge of players' cost-function datasets
DP inadequately assesses information leakage when datasets are correlated, yielding overly loose privacy guarantees
Privacy-preserving NE computation lacks a leakage measure that is both tighter than DP and flexible for algorithm design
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces pointwise maximal leakage (PML) to game-theoretic equilibrium computation
Incorporates prior knowledge of players' cost-function datasets
Derives a computable upper bound on privacy leakage that is tighter than the DP guarantee
Zhaoyang Cheng
School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
Guanpu Chen
KTH Royal Institute of Technology
Optimization · Game theory · Cybersecurity · Robustness
T. Oechtering
School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
Mikael Skoglund
KTH Royal Institute of Technology
Information Theory · Communications · Signal Processing