🤖 AI Summary
Common interpretations of network differential privacy (e.g., ε-edge DP; a standard formulation is sketched below) conflate edge-level privacy with network-level DP guarantees, wrongly suggesting protection against inference about individual edges. Under adversarial hypothesis testing, edge DP in fact guarantees indistinguishability only between pairs of *entire network structures*.
Method: We formalize network DP within an adversarial hypothesis testing framework, rigorously analyzing the semantic gap between network-level privacy and edge-level inference robustness.
Contribution/Results: First, we prove that edge DP does not imply robustness against edge-level inference. Second, we quantify the disparity between network-level and edge-level privacy guarantees. Third, we derive sufficient conditions under which this semantic gap can be bridged and develop an abstract analytical framework applicable across diverse network DP definitions (e.g., node DP, weighted edge DP). Our work establishes a more rigorous theoretical foundation for designing, evaluating, and interpreting network DP mechanisms.
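For concreteness, here is the standard definition of $\varepsilon$-edge DP from the broader literature; the paper analyzes this definition, but since the summary above does not reproduce it, treat this as background rather than the paper's own statement. A randomized mechanism $M$ satisfies $\varepsilon$-edge DP if, for every pair of networks $G, G'$ that differ in exactly one edge and every measurable output set $S$,

$$
\Pr[M(G) \in S] \;\le\; e^{\varepsilon} \, \Pr[M(G') \in S].
$$

Note that the neighboring relation is defined over whole networks, which is the root of the interpretive gap the paper studies: the guarantee compares two complete graphs, not the presence versus absence of a single edge in isolation.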
📝 Abstract
How do we interpret the differential privacy (DP) guarantee for network data? We take a deep dive into a popular form of network DP ($\varepsilon$-edge DP) to find that many of its common interpretations are flawed. Drawing on prior work on privacy with correlated data, we interpret DP through the lens of adversarial hypothesis testing and demonstrate a gap between the pairs of hypotheses actually protected under DP (tests of complete networks) and the sorts of hypotheses implied to be protected by common claims (tests of individual edges). We demonstrate some conditions under which this gap can be bridged, while leaving some questions open. While some discussion is specific to edge DP, we offer selected results in terms of abstract DP definitions and discuss the implications for other forms of network DP.
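As background for the hypothesis-testing lens used in the abstract (the standard characterization in the style of Wasserman and Zhou, stated here as a sketch rather than quoted from the paper): for any test of $H_0\colon \text{input} = G$ against $H_1\colon \text{input} = G'$, where $G$ and $G'$ are edge-neighboring networks, $\varepsilon$-edge DP jointly constrains the type I error $\alpha$ and type II error $\beta$ of the adversary's test:

$$
e^{\varepsilon}\alpha + \beta \;\ge\; 1
\qquad\text{and}\qquad
\alpha + e^{\varepsilon}\beta \;\ge\; 1.
$$

Both hypotheses here are simple: each one fixes an entire network. A claim such as "edge $(u,v)$ is protected" instead corresponds to a composite test between two sets of networks (those containing the edge versus those without it), and the pairwise guarantee above does not directly bound such a test when edges are correlated. This is the gap between the hypotheses actually protected and the hypotheses implied by common claims that the abstract describes.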