Statistical MIA: Rethinking Membership Inference Attack for Reliable Unlearning Auditing

πŸ“… 2026-02-01
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses the unreliability of existing membership inference attack (MIA)-based audits of machine unlearning, which conflate MIA failure with genuine forgetting, suffer from statistical errors that cannot be observed during auditing, and incur high computational costs. To overcome these limitations, the paper proposes a training-free Statistical Membership Inference Attack (SMIA) framework that abandons the conventional shadow-model paradigm and, for the first time, introduces statistical hypothesis testing into unlearning audits. By directly testing for distributional differences between member and non-member data, SMIA provides a quantifiable unlearning rate together with a confidence interval. The method offers both theoretical guarantees and computational efficiency, substantially reducing overhead while enabling more reliable and interpretable evaluation of unlearning efficacy, thereby establishing a new auditing paradigm.

πŸ“ Abstract
Machine unlearning (MU) is essential for enforcing the right to be forgotten in machine learning systems. A key challenge of MU is how to reliably audit whether a model has truly forgotten specified training data. Membership Inference Attacks (MIAs) are widely used for unlearning auditing, where samples that evade membership detection are often regarded as successfully forgotten. After carefully revisiting the reliability of MIA, we show that this assumption is flawed: failed membership inference does not imply true forgetting. We theoretically demonstrate that MIA-based auditing, when formulated as a binary classification problem, inevitably incurs statistical errors whose magnitude cannot be observed during the auditing process. This leads to overly optimistic evaluations of unlearning performance, while incurring substantial computational overhead due to shadow model training. To address these limitations, we propose Statistical Membership Inference Attack (SMIA), a novel training-free and highly effective auditing framework. SMIA directly compares the distributions of member and non-member data using statistical tests, eliminating the need for learned attack models. Moreover, SMIA outputs both a forgetting rate and a corresponding confidence interval, enabling quantified reliability of the auditing results. Extensive experiments show that SMIA provides more reliable auditing with significantly lower computational cost than existing MIA-based approaches. Notably, the theoretical guarantees and empirical effectiveness of SMIA suggest it as a new paradigm for reliable machine unlearning auditing.
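The two ingredients the abstract describes, a distributional comparison between member and non-member data plus a confidence interval on the reported forgetting rate, can be illustrated with a minimal self-contained sketch. This is a generic illustration, not the paper's actual SMIA statistic: the Kolmogorov-Smirnov two-sample statistic over per-sample losses, the non-member-median forgetting proxy, and the Wilson score interval are all assumptions chosen for demonstration.

```python
import math

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap
    between the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    i = j = 0
    d = 0.0
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            i += 1
        elif a[i] > b[j]:
            j += 1
        else:  # advance past tied values in both samples together
            v = a[i]
            while i < len(a) and a[i] == v:
                i += 1
            while j < len(b) and b[j] == v:
                j += 1
        d = max(d, abs(i / len(a) - j / len(b)))
    return d

def wilson_interval(k, n, z=1.96):
    """Approximate 95% Wilson score confidence interval for a proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# Hypothetical audit: per-sample losses of the forget set vs. a held-out
# non-member set. If unlearning succeeded, the two loss distributions
# should be statistically indistinguishable (small KS statistic).
forget_losses = [2.1, 1.8, 2.4, 2.0, 1.9, 2.3, 2.2, 1.7]
nonmember_losses = [2.0, 2.2, 1.9, 2.5, 1.8, 2.1, 2.4, 2.3]
print("KS distance:", ks_statistic(forget_losses, nonmember_losses))

# Crude forgetting-rate proxy: fraction of forget-set samples whose loss
# is at least the non-member median, reported with a Wilson interval.
median = sorted(nonmember_losses)[len(nonmember_losses) // 2]
k = sum(loss >= median for loss in forget_losses)
lo, hi = wilson_interval(k, len(forget_losses))
print(f"forgetting rate ~ {k / len(forget_losses):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The key contrast with shadow-model MIAs is visible even in this toy version: no attack model is trained, and the output is an interval rather than a single point estimate, so the auditor can see how much the verdict should be trusted.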
Problem

Research questions and friction points this paper is trying to address.

Machine Unlearning
Membership Inference Attack
Unlearning Auditing
Statistical Reliability
Forgetting Verification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Statistical Membership Inference Attack
Machine Unlearning Auditing
Training-free Auditing
Confidence Interval
Statistical Hypothesis Testing
πŸ‘₯ Authors

Jialong Sun (Shenzhen University of Advanced Technology)
Zeming Wei (Ph.D. Candidate, Peking University; Trustworthy AI, Adversarial Robustness, Explainability)
Jiaxuan Zou (Xi’an Jiaotong University)
Jiacheng Gong (Heilongjiang University)
Guanheng Wang (Heilongjiang University)
Chengyang Dong (Shenzhen University of Advanced Technology)
Jialong Li (Waseda University; self-adaptive systems, requirement engineering, human-in-the-loop)
Bo Liu (Shenzhen University of Advanced Technology)