SoK: Membership Inference Attacks on LLMs are Rushing Nowhere (and How to Fix It)

📅 2024-06-25
📈 Citations: 5
Influential: 0
🤖 AI Summary
Current membership inference attack (MIA) research against large language models (LLMs) suffers from a serious methodological flaw: mainstream studies rely on datasets constructed post-hoc, inducing substantial distribution shifts between member and non-member samples and thereby distorting privacy-leakage assessments. Method: the authors first quantify these shifts across six widely used MIA benchmarks using a model-less bag-of-words classifier; they then propose sounder evaluation setups, including randomized test splits, injection of randomized unique sequences, randomized fine-tuning, and several post-hoc control methods, and outline benchmarking frameworks for both sequence-level and document-level MIAs. Results: experiments indicate that much of the reported MIA success stems from dataset-construction artifacts rather than genuine model memorization. This work establishes rigorous, trustworthy evaluation principles for LLM MIAs and provides a methodological foundation for studying LLM privacy and memorization.

📝 Abstract
Whether LLMs memorize their training data and what this means, from measuring privacy leakage to detecting copyright violations, has become a rapidly growing area of research. In the last few months, more than 10 new methods have been proposed to perform Membership Inference Attacks (MIAs) against LLMs. Contrary to traditional MIAs, which rely on fixed (but randomized) records or models, these methods are mostly trained and tested on datasets collected post-hoc. Sets of members and non-members, used to evaluate the MIA, are constructed using informed guesses after the release of a model. This lack of randomization raises concerns of a distribution shift between members and non-members. In this work, we first extensively review the literature on MIAs against LLMs and show that, while most work focuses on sequence-level MIAs evaluated in post-hoc setups, a range of target models, motivations, and units of interest are considered. We then quantify the distribution shifts present in 6 datasets used in the literature using a model-less bag-of-words classifier and show that all datasets constructed post-hoc suffer from strong distribution shifts. These shifts invalidate claims that LLMs memorize strongly in real-world scenarios and, potentially, also the methodological contributions of the recent papers based on these datasets. Yet, all hope might not be lost. We introduce important considerations to properly evaluate MIAs against LLMs and discuss, in turn, potential ways forward: randomized test splits, injection of randomized (unique) sequences, randomized fine-tuning, and several post-hoc control methods. While each option comes with its advantages and limitations, we believe they collectively provide solid grounds to guide MIA development and study LLM memorization. We conclude with an overview of recommended approaches to benchmark sequence-level and document-level MIAs against LLMs.
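The abstract's key diagnostic is a model-less bag-of-words classifier: if a classifier that never queries the target LLM can separate "members" from "non-members" using surface features alone, the benchmark itself leaks membership, and any MIA score on it is confounded. The sketch below is a minimal illustration of that idea (it is not the paper's code; the toy data, function names, and smoothed log-odds scoring are all assumptions for the example):

```python
# Hedged sketch: detect member/non-member distribution shift with a
# model-less bag-of-words classifier. High held-out AUC means the two
# sets are separable without ever touching an LLM.
import math
import random
from collections import Counter

def train_log_odds(members, non_members, alpha=1.0):
    """Per-token log-odds of appearing in member vs non-member texts,
    Laplace-smoothed with pseudo-count alpha."""
    m_counts = Counter(tok for t in members for tok in t.split())
    n_counts = Counter(tok for t in non_members for tok in t.split())
    m_total, n_total = sum(m_counts.values()), sum(n_counts.values())
    vocab = set(m_counts) | set(n_counts)
    return {
        tok: math.log((m_counts[tok] + alpha) / (m_total + alpha * len(vocab)))
           - math.log((n_counts[tok] + alpha) / (n_total + alpha * len(vocab)))
        for tok in vocab
    }

def score(text, log_odds):
    """Higher score -> text looks more like the member distribution."""
    return sum(log_odds.get(tok, 0.0) for tok in text.split())

def auc(member_scores, non_member_scores):
    """P(random member scores above random non-member); ties count 0.5."""
    wins = sum((m > n) + 0.5 * (m == n)
               for m in member_scores for n in non_member_scores)
    return wins / (len(member_scores) * len(non_member_scores))

# Toy post-hoc benchmark with a temporal shift: members predate the model
# cutoff (2021), non-members were published after it (2023).
random.seed(0)
topics = ["sports", "politics", "science", "music"]
members = [f"news report from 2021 about {random.choice(topics)}"
           for _ in range(200)]
non_members = [f"news report from 2023 about {random.choice(topics)}"
               for _ in range(200)]
log_odds = train_log_odds(members[:100], non_members[:100])
shift_auc = auc([score(t, log_odds) for t in members[100:]],
                [score(t, log_odds) for t in non_members[100:]])
# shift_auc is near 1.0: the split is separable with no model access at all.
```

In a properly randomized benchmark, where members and non-members are i.i.d. draws from the same pool, this classifier should score near 0.5 AUC; anything well above that flags a construction artifact.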
Problem

Research questions and friction points this paper is trying to address.

Assessing privacy risks from LLM memorization of training data.
Evaluating effectiveness of Membership Inference Attacks on LLMs.
Addressing distribution shifts in datasets used for MIA evaluations.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Randomized test splits for MIA evaluation
Injections of randomized unique sequences
Randomized fine-tuning and post-hoc controls
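The first remedy above, randomized test splits, amounts to deciding membership by coin flip over a single candidate pool before training, so members and non-members are i.i.d. draws from the same distribution. A minimal sketch of such a split (the function name and parameters are illustrative, not from the paper):

```python
# Hedged sketch of a randomized member/non-member split: membership is
# assigned by a seeded shuffle over one candidate pool *before* training,
# ruling out the distribution shifts of post-hoc set construction.
import random

def randomized_split(candidates, member_frac=0.5, seed=0):
    """Shuffle the pool and cut it: the first member_frac goes into the
    (fine-)tuning data as members, the rest is held out as non-members."""
    rng = random.Random(seed)
    pool = list(candidates)
    rng.shuffle(pool)
    cut = int(len(pool) * member_frac)
    return pool[:cut], pool[cut:]

members, non_members = randomized_split(range(100), seed=1)
```

Recording the seed makes the split reproducible, and because both sets come from the same shuffled pool, a model-less classifier should not be able to tell them apart.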