A Critical Review on the Effectiveness and Privacy Threats of Membership Inference Attacks

📅 2026-03-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes the first membership inference attack (MIA) threat assessment framework tailored to real-world deployment conditions, addressing the uncertainty surrounding whether MIAs truly constitute a substantive privacy threat beyond their common use as a proxy metric. Through a systematic literature review and theoretical analysis, the study conducts a unified evaluation of representative MIA methods under practical constraints. The findings reveal that most MIAs pose only limited privacy risks in realistic settings, suggesting that the prevailing practice of treating MIA success as a universal privacy measure may significantly overestimate actual threats. Consequently, this overestimation can lead to unnecessary sacrifices in model utility. By challenging the assumed efficacy of MIAs as default privacy indicators, this research establishes a new paradigm for more grounded and context-aware privacy evaluations.

📝 Abstract
Membership inference attacks (MIAs) aim to determine whether a data sample was included in a machine learning (ML) model's training set and have become the de facto standard for measuring privacy leakage in ML. We propose an evaluation framework that defines the conditions under which MIAs constitute a genuine privacy threat, and review representative MIAs against it. We find that, under the realistic conditions defined in our framework, MIAs represent weak privacy threats. Thus, relying on them as a privacy metric in ML can lead to an overestimation of risk and to unnecessary sacrifices in model utility as a consequence of employing overly strong defenses.
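To make the attack model concrete, below is a minimal sketch of one classic MIA variant: the loss-threshold attack, in which the adversary guesses "member" when the model's loss on a sample falls below a calibrated threshold (training samples tend to be fitted more closely). This is an illustrative example only, not the paper's evaluation framework; the confidence values and threshold are hypothetical.

```python
import math

def nll_loss(prob_true_class: float) -> float:
    """Negative log-likelihood the model assigns to the sample's true class."""
    return -math.log(max(prob_true_class, 1e-12))

def loss_threshold_mia(prob_true_class: float, threshold: float) -> bool:
    """Predict membership: True iff the sample's loss is below the threshold."""
    return nll_loss(prob_true_class) < threshold

# Hypothetical model confidences: members are fitted tightly, non-members less so.
member_probs = [0.99, 0.97, 0.95]
nonmember_probs = [0.60, 0.55, 0.70]
threshold = 0.2  # calibrated by the attacker, e.g. via shadow models

predictions = [loss_threshold_mia(p, threshold)
               for p in member_probs + nonmember_probs]
# → [True, True, True, False, False, False] on this toy data
```

In realistic settings of the kind the paper considers, the attacker rarely has such cleanly separated confidences or a well-calibrated threshold, which is part of why reported MIA success can overstate the practical privacy risk.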
Problem

Research questions and friction points this paper is trying to address.

membership inference attacks
privacy threats
machine learning
privacy leakage
evaluation framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

membership inference attacks
privacy evaluation framework
machine learning privacy
privacy threat assessment
model utility
Najeeb Jebreel
PhD, Universitat Rovira i Virgili
Machine Learning, Data Privacy, Trustworthy ML
David Sánchez
Universitat Rovira i Virgili, Department of Computer Engineering and Mathematics, CYBERCAT-Center for Cybersecurity Research of Catalonia, ComSCIAM-Center for Computational Science and Applied Mathematics, Av. Països Catalans 26, 43007 Tarragona, Catalonia
Josep Domingo-Ferrer
Distinguished Full Professor, Universitat Rovira i Virgili, Director-CYBERCAT, FIEEE, ACM DS
Data protection, Privacy, Cybersecurity, Machine learning, Statistical Disclosure Control