Disparate Privacy Vulnerability: Targeted Attribute Inference Attacks and Defenses

📅 2025-04-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses "disparate privacy vulnerability": the exacerbation of attribute inference attacks (AIAs) against vulnerable subpopulations in privacy-sensitive domains. The authors propose the first targeted attack paradigm for high-risk subgroups, comprising a disparity inference attack that identifies vulnerable groups and two types of targeted AIAs. They formally define disparate privacy vulnerability and establish a targeted attack framework grounded in distributional shift analysis, adversarial subgroup identification, and customized attack modeling. To counter this threat, they design a utility-preserving disparity mitigation mechanism integrating regularization-based privacy constraints. Experiments demonstrate that the targeted attacks achieve significantly higher inference accuracy on vulnerable subgroups than non-targeted baselines. Moreover, the defense fully blocks targeted attacks while preserving the original model's accuracy within ±0.5%. This work advances both the understanding of fine-grained privacy risks and the development of subgroup-aware privacy defenses.

📝 Abstract
As machine learning (ML) technologies become more prevalent in privacy-sensitive areas like healthcare and finance, inevitably incorporating sensitive information into data-driven algorithms, it is vital to scrutinize whether these data face privacy leakage risks. One potential threat arises from an adversary querying trained models using the public, non-sensitive attributes of entities in the training data to infer their private, sensitive attributes, a technique known as the attribute inference attack. This attack is particularly deceptive because, while it may perform poorly in predicting sensitive attributes across the entire dataset, it excels at predicting the sensitive attributes of records from a few vulnerable groups, a phenomenon known as disparate vulnerability. This paper illustrates that an adversary can exploit this disparity to carry out a series of new attacks, revealing a threat level beyond what was previously recognized. We first develop a novel inference attack, the disparity inference attack, which identifies high-risk groups within the dataset. We then introduce two targeted variations of the attribute inference attack that identify and exploit a vulnerable subset of the training data, marking the first instances of targeted attacks in this category and achieving significantly higher accuracy than their untargeted counterparts. We are also the first to introduce a novel and effective disparity mitigation technique that simultaneously preserves model performance and prevents any risk of targeted attacks.
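The attribute inference attack described in the abstract can be illustrated with a toy sketch: the adversary queries a trained model with a record's public attributes and combines the model's output with auxiliary knowledge to guess the hidden sensitive attribute. All names, rules, and numbers below are hypothetical, not from the paper.

```python
# Toy sketch of an attribute inference attack (AIA). The target model,
# attack rule, and records are illustrative assumptions only.

def target_model(age, income):
    # Stand-in for a trained target model: predicts loan approval from
    # public attributes (age, income).
    return 1 if income > 50 else 0

def infer_sensitive(record, model):
    # Adversary's inference rule: combine the model's output with public
    # attributes to guess the private, sensitive attribute.
    approved = model(record["age"], record["income"])
    return 1 if (approved == 0 and record["age"] > 60) else 0

records = [
    {"age": 70, "income": 30, "sensitive": 1},
    {"age": 25, "income": 80, "sensitive": 0},
    {"age": 65, "income": 40, "sensitive": 1},
    {"age": 62, "income": 45, "sensitive": 0},
]
guesses = [infer_sensitive(r, target_model) for r in records]
accuracy = sum(g == r["sensitive"] for g, r in zip(guesses, records)) / len(records)
print(accuracy)
```

The point of the toy: the adversary never sees the sensitive attribute directly; it only queries the model with public attributes, which is exactly the access model the abstract assumes.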
Problem

Research questions and friction points this paper is trying to address.

Targeted attribute inference attacks exploit vulnerable data subsets
Disparate privacy vulnerability risks in machine learning models
Mitigating privacy leaks while preserving model performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Developed disparity inference attack for high-risk groups
Introduced targeted attribute inference attack variations
Proposed disparity mitigation technique preserving model performance
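The disparity that the disparity inference attack exploits can be made concrete by computing the attack's accuracy separately per subgroup and taking the gap between the best- and worst-off groups. A minimal sketch, with hypothetical group labels and data:

```python
# Hypothetical sketch of quantifying disparate vulnerability: attribute
# inference accuracy computed per subgroup, then the max-min gap.
from collections import defaultdict

def per_group_accuracy(records, guesses):
    # Tally attack hits and totals for each subgroup.
    hits, totals = defaultdict(int), defaultdict(int)
    for r, g in zip(records, guesses):
        totals[r["group"]] += 1
        hits[r["group"]] += int(g == r["sensitive"])
    return {grp: hits[grp] / totals[grp] for grp in totals}

records = [
    {"group": "A", "sensitive": 1}, {"group": "A", "sensitive": 0},
    {"group": "B", "sensitive": 1}, {"group": "B", "sensitive": 1},
]
guesses = [1, 1, 1, 1]  # the adversary's inferred sensitive attributes
acc = per_group_accuracy(records, guesses)
disparity = max(acc.values()) - min(acc.values())
print(acc, disparity)
```

A large gap flags a high-risk subgroup (here, group "B"), which is the signal a targeted attack would then concentrate on and a subgroup-aware defense would aim to shrink.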
Ehsanul Kabir
Pennsylvania State University
Lucas Craig
Pennsylvania State University
Shagufta Mehnaz
The Pennsylvania State University
Information Security & Privacy