🤖 AI Summary
This paper addresses the weakening of differential privacy (DP) guarantees on correlated data, where the conventional privacy budget fails to quantify actual information leakage. To resolve this, it introduces Pointwise Maximal Leakage (PML) into the DP analysis framework. Using information-theoretic modeling, the method characterizes how correlations among records amplify privacy leakage, relaxing the standard i.i.d. assumption, and establishes a PML-based framework for quantifying privacy loss under correlation. Theoretical analysis derives tight bounds on this loss, and empirical evaluation on diverse correlated datasets shows that PML reflects actual leakage more precisely than the classical ε parameter, improving the verifiability and practicality of privacy-mechanism design. The core contribution is the first theoretically grounded and empirically validated DP quantification paradigm explicitly tailored to correlated data.
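To make the correlation effect concrete, here is a minimal numerical sketch (not the paper's implementation). It assumes the standard definition of pointwise maximal leakage, ℓ(X → y) = log(max_x P(y|x) / P(y)), a toy ε-DP randomized-response mechanism, and a hypothetical two-record dataset where record X2 agrees with record X1 with probability `q`. The per-record DP analysis says releasing a noisy version of X2 leaks nothing about X1; the PML computation shows the leakage about X1 is strictly positive once the records are correlated.

```python
import numpy as np

def pml(prior, channel):
    """Pointwise maximal leakage of each outcome y:
    l(X -> y) = log( max_x P(y|x) / P(y) ).
    prior: shape (n_x,); channel: shape (n_x, n_y), rows are P(y|x)."""
    p_y = prior @ channel                    # output marginal P(y)
    return np.log(channel.max(axis=0) / p_y)

eps = 1.0
e = np.exp(eps)
# eps-DP binary randomized response: report the truth w.p. e/(1+e)
rr = np.array([[e, 1.0], [1.0, e]]) / (1.0 + e)
prior = np.array([0.5, 0.5])               # uniform prior on the record

# Direct release Y = RR(X2): PML about X2 stays below the budget eps.
direct = pml(prior, rr)

# Correlated neighbor: X2 = X1 with probability q, then Y = RR(X2).
q = 0.9
corr = np.array([[q, 1 - q], [1 - q, q]])  # P(x2 | x1)
indirect = pml(prior, corr @ rr)           # leakage about the untouched X1

print(direct.max(), indirect.max())        # both positive, both < eps
```

Here the naive DP accounting assigns X1 zero leakage, while PML quantifies a positive leakage of about 0.31 nats that grows with the correlation `q`, which is the kind of gap the paper's framework is designed to capture.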