Evaluating the Defense Potential of Machine Unlearning against Membership Inference Attacks

📅 2025-08-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work systematically evaluates the efficacy of machine unlearning as a defense against membership inference attacks (MIAs). Addressing the open question of whether unlearning inherently enhances MIA resilience, we empirically benchmark prominent unlearning methods—including Exact Unlearning (EU), SISA, and Approximate Model Unlearning (AMU)—against standard MIAs (e.g., shadow training and loss-based attacks) across four multi-domain benchmark datasets spanning image and tabular modalities. Our results demonstrate that machine unlearning is not, by itself, an effective MIA defense; its privacy-preserving effect is highly contingent on both the specific unlearning algorithm and data characteristics—some approaches even exacerbate model vulnerability to MIAs. Crucially, this study is the first to reveal a non-monotonic relationship between unlearning strategies and MIA robustness. These findings provide critical empirical evidence and actionable algorithm-selection guidelines for designing privacy-enhancing systems targeting MIA resistance.
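The summary mentions loss-based attacks as one of the standard MIAs benchmarked. As background, a minimal sketch of that attack idea follows: the adversary predicts that a sample was a training member when the model's loss on it is below some threshold. The threshold and the toy loss values here are illustrative assumptions, not values from the paper.

```python
import numpy as np

def loss_based_mia(losses: np.ndarray, threshold: float) -> np.ndarray:
    """Predict membership: a low per-sample loss suggests the model
    saw the sample during training (True = predicted member)."""
    return losses < threshold

# Toy per-sample losses: members tend to have lower loss than non-members.
member_losses = np.array([0.05, 0.10, 0.20])
nonmember_losses = np.array([0.90, 1.20, 0.70])

# Hypothetical threshold; in practice the adversary would calibrate it,
# e.g. via shadow models trained on similar data.
threshold = 0.5

preds_members = loss_based_mia(member_losses, threshold)
preds_nonmembers = loss_based_mia(nonmember_losses, threshold)

# Attack accuracy on this toy set: fraction of correct membership calls.
accuracy = (preds_members.sum() + (~preds_nonmembers).sum()) / 6
```

The paper's point is precisely that unlearning does not uniformly shrink the member/non-member loss gap this attack exploits; depending on the algorithm, the gap can even widen.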

📝 Abstract
Membership Inference Attacks (MIAs) pose a significant privacy risk, as they enable adversaries to determine whether a specific data point was included in the training dataset of a model. While Machine Unlearning is primarily designed as a privacy mechanism to efficiently remove private data from a machine learning model without the need for full retraining, its impact on the susceptibility of models to MIA remains an open question. In this study, we systematically assess the vulnerability of models to MIA after applying state-of-the-art Machine Unlearning algorithms. Our analysis spans four diverse datasets (two from the image domain and two in tabular format), exploring how different unlearning approaches influence the exposure of models to membership inference. The findings highlight that while Machine Unlearning is not inherently a countermeasure against MIA, the unlearning algorithm and data characteristics can significantly affect a model's vulnerability. This work provides essential insights into the interplay between Machine Unlearning and MIAs, offering guidance for the design of privacy-preserving machine learning systems.
Problem

Research questions and friction points this paper is trying to address.

Assessing machine unlearning's defense against membership inference attacks
Evaluating vulnerability after applying state-of-the-art unlearning algorithms
Examining how unlearning approaches affect model exposure to MIAs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluates machine unlearning against membership inference
Tests state-of-the-art unlearning algorithms on four image and tabular benchmark datasets
Analyzes unlearning impact on model vulnerability
Aristeidis Sidiropoulos
Democritus University of Thrace, Greece
Christos Chrysanthos Nikolaidis
Theodoros Tsiolakis
Nikolaos Pavlidis
Vasilis Perifanis
Pavlos S. Efraimidis
Professor, ECE, Democritus University of Thrace and affiliated member of Athena RC
Algorithms, Federated Machine Learning, Privacy, Social Network Analysis, Algorithmic Game Theory