Beyond Membership: Limitations of Add/Remove Adjacency in Differential Privacy

📅 2025-11-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
In differentially private (DP) machine learning, protecting attribute-level privacy—such as labels in supervised fine-tuning—is inadequately served by the conventional add/remove adjacency relation, which implicitly safeguards dataset membership rather than individual attributes and thereby overstates privacy guarantees. Method: The paper advocates the substitute adjacency relation instead, where neighboring datasets differ by swapping one record, and develops a correspondingly tighter privacy-budget accounting framework. It further designs a novel attribute-level privacy auditing attack to expose attribute leakage masked under add/remove accounting. Contribution/Results: Experiments demonstrate that, under identical DP budgets, substitute adjacency yields privacy claims closely aligned with empirical attribute leakage, whereas add/remove fails to ensure attribute-level privacy. This work clarifies the fundamental impact of adjacency selection on attribute privacy, establishing a more precise theoretical foundation and practical toolkit for DP training.

📝 Abstract
Training machine learning models with differential privacy (DP) limits an adversary's ability to infer sensitive information about the training data. It can be interpreted as a bound on an adversary's capability to distinguish two adjacent datasets according to a chosen adjacency relation. In practice, most DP implementations use the add/remove adjacency relation, where two datasets are adjacent if one can be obtained from the other by adding or removing a single record, thereby protecting membership. In many ML applications, however, the goal is to protect attributes of individual records (e.g., labels used in supervised fine-tuning). We show that privacy accounting under add/remove overstates attribute privacy compared to accounting under the substitute adjacency relation, which permits substituting one record. To demonstrate this gap, we develop novel attacks to audit DP under substitute adjacency, and show empirically that audit results are inconsistent with DP guarantees reported under add/remove, yet remain consistent with the budget accounted under the substitute adjacency relation. Our results highlight that the choice of adjacency when reporting DP guarantees is critical when the protection target is per-record attributes rather than membership.
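The gap between the two adjacency relations can be made concrete with the textbook Gaussian mechanism. This is an illustrative sketch, not the paper's accounting framework: for a clipped-gradient sum with per-record norm bound `C`, adding or removing one record changes the sum by at most `C`, while substituting one record can change it by up to `2C`, so for the same noise level the substitute-adjacency budget is roughly double. The constants `C`, `sigma`, and `delta` below are arbitrary example values.

```python
import math

def gaussian_mechanism_epsilon(sensitivity, sigma, delta=1e-5):
    """Epsilon from the classical Gaussian mechanism bound
    sigma >= sensitivity * sqrt(2 * ln(1.25 / delta)) / eps,
    solved for eps (valid in the eps < 1 regime)."""
    return sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / sigma

C = 1.0      # per-record clipping norm (illustrative)
sigma = 8.0  # Gaussian noise standard deviation (illustrative)

# Add/remove adjacency: one clipped record appears or vanishes,
# so the L2 sensitivity of the sum is C.
eps_add_remove = gaussian_mechanism_epsilon(C, sigma)

# Substitute adjacency: one clipped record is swapped for another,
# so the worst-case change is 2C.
eps_substitute = gaussian_mechanism_epsilon(2.0 * C, sigma)

print(f"add/remove eps = {eps_add_remove:.3f}")
print(f"substitute eps = {eps_substitute:.3f}")  # exactly 2x in this simple bound
```

Under this simple per-release bound the substitute-adjacency epsilon is exactly twice the add/remove one; the paper's point is that reporting only the add/remove figure understates what an attribute-level adversary can learn.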
Problem

Research questions and friction points this paper is trying to address.

Addresses the limitations of add/remove adjacency in differential privacy for attribute protection
Demonstrates that the substitute adjacency relation better safeguards individual record attributes
Highlights the importance of adjacency choice in DP guarantees when the goal is not membership protection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Substitute adjacency relation for attribute privacy
Novel attacks for auditing differential privacy
Demonstrated importance of adjacency choice when reporting DP guarantees