🤖 AI Summary
This work proposes the first threat-assessment framework for membership inference attacks (MIAs) tailored to real-world deployment conditions, addressing uncertainty about whether MIAs constitute a substantive privacy threat beyond their common use as a proxy metric. Through a systematic literature review and theoretical analysis, the study evaluates representative MIA methods in a unified way under practical constraints. The findings reveal that most MIAs pose only limited privacy risk in realistic settings, suggesting that the prevailing practice of treating MIA success as a universal privacy measure may significantly overestimate actual threats; this overestimation can in turn lead to unnecessary sacrifices in model utility. By challenging the assumed efficacy of MIAs as default privacy indicators, this research establishes a more grounded, context-aware paradigm for privacy evaluation.
📝 Abstract
Membership inference attacks (MIAs) aim to determine whether a data sample was included in a machine learning (ML) model's training set and have become the de facto standard for measuring privacy leakage in ML. We propose an evaluation framework that defines the conditions under which MIAs constitute a genuine privacy threat, and review representative MIAs against it. We find that, under the realistic conditions defined in our framework, MIAs represent weak privacy threats. Thus, relying on them as a privacy metric in ML can lead to an overestimation of risk and, in turn, to unnecessary sacrifices in model utility caused by deploying overly strong defenses.
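To make the MIA setting concrete, below is a minimal sketch of one common baseline: a loss-threshold attack, which predicts that a sample is a training-set member when the model's loss on it falls below a threshold. This is an illustration of the attack family the abstract discusses, not the paper's evaluation framework; the loss distributions, sample sizes, and threshold here are synthetic assumptions chosen to mimic an overfit model whose training samples have lower loss.

```python
import numpy as np

# Synthetic stand-ins for per-sample losses (hypothetical values):
# members (training samples) are drawn with lower loss than non-members,
# mimicking a model that overfits its training set.
rng = np.random.default_rng(0)
member_losses = rng.normal(loc=0.2, scale=0.1, size=1000)     # training samples
nonmember_losses = rng.normal(loc=0.8, scale=0.3, size=1000)  # held-out samples

def infer_membership(losses, threshold):
    """Predict 'member' (True) when the loss on a sample is below the threshold."""
    return losses < threshold

threshold = 0.5  # hypothetical cutoff, e.g. an estimate of the average training loss
tpr = infer_membership(member_losses, threshold).mean()     # true-positive rate
fpr = infer_membership(nonmember_losses, threshold).mean()  # false-positive rate

# Balanced accuracy over an even mix of members and non-members;
# 0.5 corresponds to random guessing.
attack_accuracy = 0.5 * (tpr + (1 - fpr))
print(f"attack balanced accuracy: {attack_accuracy:.2f}")
```

Under the framework's realistic conditions, the paper argues that the effective advantage of such attacks over random guessing (balanced accuracy near 0.5) is often far smaller than benchmark evaluations suggest.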