Evaluating Differential Privacy on Correlated Datasets Using Pointwise Maximal Leakage

📅 2025-02-08
🏛️ Annual Privacy Forum
📈 Citations: 1
Influential: 0
🤖 AI Summary
This paper addresses the weakening of differential privacy (DP) guarantees on correlated data, where the conventional privacy budget fails to quantify the actual information leakage. To resolve this, it introduces Pointwise Maximal Leakage (PML) into the DP analytical framework for the first time. Using information-theoretic modeling, the method characterizes how correlations between records amplify privacy leakage, thereby relaxing the standard i.i.d. assumption, and establishes a PML-based framework for quantifying privacy loss under correlation. Theoretical analysis derives tight bounds on this loss, and empirical evaluation across diverse correlated datasets shows that PML reflects actual leakage more precisely than the classical ε parameter, improving the verifiability and practicality of privacy-mechanism design. The core contribution is the first theoretically grounded and empirically validated DP quantification paradigm explicitly tailored to correlated data.
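The effect the summary describes can be sketched numerically. The following is a minimal illustration, not the paper's construction: two binary records X1 and X2, an ε-DP randomized-response mechanism applied only to X2, and PML computed as log max_x1 P(y|x1)/P(y) for one released output y. When the records are independent, the output leaks nothing about X1; when they are correlated, PML about X1 is strictly positive even though the DP guarantee on X2 alone is unchanged. All distributions and parameter values here are hypothetical choices for illustration.

```python
import math

def pml(prior_x1, cond_x2_given_x1, eps, y):
    """Pointwise maximal leakage about X1 from one output y of an
    eps-randomized-response mechanism applied to X2 (illustrative sketch).

    prior_x1          -- [P(X1=0), P(X1=1)]
    cond_x2_given_x1  -- cond_x2_given_x1[x1][x2] = P(X2=x2 | X1=x1)
    """
    keep = math.exp(eps) / (1 + math.exp(eps))  # P(Y = X2) for randomized response
    # P(y | x1) = sum_x2 P(x2 | x1) * P(y | x2)
    p_y_given_x1 = [
        sum(cond_x2_given_x1[x1][x2] * (keep if y == x2 else 1 - keep)
            for x2 in (0, 1))
        for x1 in (0, 1)
    ]
    p_y = sum(prior_x1[x1] * p_y_given_x1[x1] for x1 in (0, 1))
    # PML(X1 -> y) = log max_x1 P(y | x1) / P(y), in nats
    return math.log(max(p_y_given_x1) / p_y)

eps = 1.0
prior = [0.5, 0.5]
independent = [[0.5, 0.5], [0.5, 0.5]]   # X2 independent of X1
correlated = [[0.9, 0.1], [0.1, 0.9]]    # P(X2 = X1) = 0.9

print(pml(prior, independent, eps, 0))   # ~0.0: no leakage about X1
print(pml(prior, correlated, eps, 0))    # positive: correlation leaks about X1
```

Here the mechanism never touches X1, yet under correlation the leakage about X1 is nonzero, which is exactly the gap between the nominal ε budget and actual leakage that the paper's PML framework is meant to expose.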

Problem

Research questions and friction points this paper is trying to address.

Assessing how well differential privacy holds up on correlated datasets.
Quantifying the actual information leakage using pointwise maximal leakage.
Exposing the limits of the DP guarantee when records are not independent.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages Pointwise Maximal Leakage to measure per-outcome leakage
Assesses how correlation weakens Differential Privacy guarantees
Highlights the need for mechanisms tailored to correlated data