🤖 AI Summary
Large language models (LLMs) risk memorizing sensitive training data, making them vulnerable to membership inference attacks (MIAs); however, existing privacy evaluations suffer from non-public benchmarks and inconsistent methodologies, leading to divergent conclusions. To address this, we propose PerProb, a label-free, model-agnostic, and task-agnostic framework for indirect memorization assessment. PerProb quantifies training-data memorization by measuring the deviation in perplexity and mean log-probability between outputs generated by the target (victim) model and a reference (adversary) model, and supports both black-box and white-box settings. We systematically categorize MIAs into four distinct attack patterns and empirically reveal heterogeneous privacy risks across mainstream LLMs. Extensive evaluation on five benchmark datasets validates PerProb's effectiveness, and we further show that mitigation techniques such as differential privacy substantially reduce memorization-induced leakage.
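As a rough illustration of the scoring step described above, the sketch below computes the perplexity and mean log-probability of a candidate passage under two causal language models and reports their deviation. The model names (`gpt2`, `distilgpt2`) and the `score` helper are placeholders rather than the paper's actual setup; PerProb is model-agnostic, so any victim/adversary pair with a shared tokenizer could stand in.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def score(model, tokenizer, text, device="cpu"):
    """Return (perplexity, mean log-probability per token) of `text` under `model`."""
    enc = tokenizer(text, return_tensors="pt").to(device)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    nll = out.loss  # mean negative log-likelihood per token
    return torch.exp(nll).item(), -nll.item()


# Placeholder model pair: any causal LM pair with a shared tokenizer could
# stand in for the victim (target) and adversary (reference) models.
victim = AutoModelForCausalLM.from_pretrained("gpt2").eval()
adversary = AutoModelForCausalLM.from_pretrained("distilgpt2").eval()
tokenizer = AutoTokenizer.from_pretrained("gpt2")

text = "Example passage whose memorization we want to probe."
ppl_v, logp_v = score(victim, tokenizer, text)
ppl_a, logp_a = score(adversary, tokenizer, text)

# Lower perplexity / higher log-probability under the victim relative to the
# adversary is read as a signal that the passage was memorized during training.
print(f"perplexity deviation:      {ppl_a - ppl_v:.3f}")
print(f"log-probability deviation: {logp_v - logp_a:.3f}")
```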
📝 Abstract
The rapid advancement of Large Language Models (LLMs) has been driven by extensive datasets that may contain sensitive information, raising serious privacy concerns. One notable threat is the Membership Inference Attack (MIA), in which an adversary infers whether a specific sample was used to train the model. However, the true impact of MIAs on LLMs remains unclear due to inconsistent findings and the lack of standardized evaluation methods, further complicated by the undisclosed nature of many LLM training sets. To address these limitations, we propose PerProb, a unified, label-free framework for indirectly assessing LLM memorization vulnerabilities. PerProb evaluates changes in perplexity and average log probability between data generated by the victim and adversary models, enabling an indirect estimate of training-induced memorization. Unlike prior MIA methods that rely on member/non-member labels or internal model access, PerProb is model- and task-agnostic and applicable in both black-box and white-box settings. Through a systematic classification of MIAs into four attack patterns, we evaluate PerProb's effectiveness across five datasets, revealing varying memorization behaviors and privacy risks among LLMs. Additionally, we assess mitigation strategies, including knowledge distillation, early stopping, and differential privacy, and demonstrate their effectiveness in reducing data leakage. Our findings offer a practical and generalizable framework for evaluating and improving LLM privacy.
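Among the mitigation strategies mentioned above, differential privacy is commonly applied to model training via DP-SGD-style updates. The sketch below is a minimal, hedged illustration of that general recipe (per-example gradient clipping plus Gaussian noise) in plain PyTorch; the `dp_sgd_step` helper and its parameters are hypothetical and are not drawn from the paper's implementation.

```python
import torch


def dp_sgd_step(model, loss_fn, microbatches, optimizer,
                clip_norm=1.0, noise_multiplier=1.0):
    """One differentially private SGD step (illustrative helper, not the paper's code).

    Each per-example gradient is clipped to an L2 norm of `clip_norm`, the clipped
    gradients are summed, Gaussian noise scaled by `noise_multiplier * clip_norm`
    is added, and the result is averaged before the optimizer step.
    """
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]

    for x, y in microbatches:  # one example per microbatch keeps the sketch simple
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        total = torch.sqrt(sum(p.grad.norm() ** 2 for p in params)).item()
        scale = min(1.0, clip_norm / (total + 1e-12))
        for s, p in zip(summed, params):
            s += p.grad * scale

    for p, s in zip(params, summed):
        noise = torch.randn_like(p) * (noise_multiplier * clip_norm)
        p.grad = (s + noise) / len(microbatches)
    optimizer.step()
```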