🤖 AI Summary
This work identifies a fundamental flaw in how membership inference (MI) attacks against foundation models are evaluated: member and non-member samples are drawn from different distributions, so standard metrics such as AUC reflect distribution shift rather than genuine memorization or privacy leakage. To demonstrate this, the authors construct "blind" baselines: simple classifiers based on text statistics or embedding distances that never query the target model. Across 8 published MI evaluation datasets, these blind baselines consistently achieve higher AUC than state-of-the-art MI attacks. The study concludes that existing MI evaluations primarily measure distributional discrepancies rather than true membership leakage, undermining their validity for detecting copyrighted training data, measuring test-set contamination, or auditing machine unlearning.
📝 Abstract
Membership inference (MI) attacks try to determine if a data sample was used to train a machine learning model. For foundation models trained on unknown Web data, MI attacks are often used to detect copyrighted training materials, measure test set contamination, or audit machine unlearning. Unfortunately, we find that evaluations of MI attacks for foundation models are flawed, because they sample members and non-members from different distributions. For 8 published MI evaluation datasets, we show that blind attacks -- that distinguish the member and non-member distributions without looking at any trained model -- outperform state-of-the-art MI attacks. Existing evaluations thus tell us nothing about membership leakage of a foundation model's training data.
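The core idea of a blind attack can be sketched in a few lines: score every sample with a model-free statistic and measure how well that score alone separates members from non-members. The sketch below is illustrative, not the paper's implementation; the compression-ratio feature and the pairwise AUC computation are our own assumptions for a minimal, self-contained example.

```python
import zlib

def blind_score(text: str) -> float:
    """Model-free score for a text sample: compressed bytes per raw byte.
    Any simple text statistic could be used here; crucially, no trained
    model is ever queried."""
    raw = text.encode("utf-8")
    return len(zlib.compress(raw)) / max(len(raw), 1)

def auc(member_scores, nonmember_scores):
    """AUC computed directly as the probability that a randomly chosen
    member outscores a randomly chosen non-member (ties count 0.5)."""
    wins = sum(
        1.0 if m > n else 0.5 if m == n else 0.0
        for m in member_scores
        for n in nonmember_scores
    )
    return wins / (len(member_scores) * len(nonmember_scores))

# Toy usage with made-up samples: if the AUC of a model-free score is
# well above 0.5, the member/non-member split itself is separable, and
# any MI attack's AUC on this benchmark is confounded by that shift.
members = [blind_score(t) for t in ["old crawl text.", "archived page."]]
nonmembers = [blind_score(t) for t in ["newly published article.", "fresh post."]]
print(auc(members, nonmembers))
```

The paper's finding is that on published MI benchmarks, scores like this, which cannot possibly reflect what the model memorized, already beat real MI attacks, so the benchmarks measure the distribution gap rather than membership leakage.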