PII-Scope: A Benchmark for Training Data PII Leakage Assessment in LLMs

📅 2024-10-09
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
This work addresses the underestimation of personally identifiable information (PII) leakage in large language models (LLMs) that results from assessing training-data extraction with single-query attacks alone. We propose PII-Scope, the first benchmark to comprehensively cover realistic threat scenarios, including single- and multi-turn adversarial querying and iterative learning. Our method introduces a prompt-engineering-based attack framework featuring diversity-aware sampling and adaptive demonstration selection, alongside a standardized evaluation protocol that compares PII leakage between pre-trained and fine-tuned models. Experimental results reveal that fine-tuned models exhibit significantly higher PII leakage than their pre-trained counterparts; that multi-turn adversarial queries boost extraction rates by up to 5×; and that hyperparameters, especially demonstration selection, critically influence attack efficacy. PII-Scope thus establishes the first realistic, threat-informed benchmark for assessing PII leakage in LLMs, enabling more rigorous security evaluation and informing robust defense strategies.
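The multi-turn querying with diversity-aware demonstration sampling described above can be sketched roughly as follows. This is a hypothetical illustration, not the paper's implementation: the `query_model` interface, the prompt template, and the demonstration format are all assumptions.

```python
import random

def extract_pii_multiturn(query_model, demonstrations, targets, budget=5):
    """Hypothetical multi-turn PII extraction loop: for each target,
    issue several prompts built from freshly sampled in-context
    demonstrations and collect every candidate the model emits."""
    leaked = {}
    for target in targets:
        candidates = set()
        for _ in range(budget):
            # Diversity-aware sampling: vary the demonstrations on each
            # turn so repeated queries explore different contexts.
            demos = random.sample(demonstrations, k=min(3, len(demonstrations)))
            prompt = "\n".join(
                f"Name: {d['name']} Email: {d['email']}" for d in demos
            )
            prompt += f"\nName: {target} Email:"
            candidates.add(query_model(prompt).strip())
        leaked[target] = candidates
    return leaked
```

Under a fixed query budget, collecting the union of candidates across turns is what lets multi-turn attacks outperform a single query.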

📝 Abstract
In this work, we introduce PII-Scope, a comprehensive benchmark designed to evaluate state-of-the-art methodologies for PII extraction attacks targeting LLMs across diverse threat settings. Our study provides a deeper understanding of these attacks by uncovering several hyperparameters (e.g., demonstration selection) crucial to their effectiveness. Building on this understanding, we extend our study to more realistic attack scenarios, exploring PII attacks that employ advanced adversarial strategies, including repeated and diverse querying, and leveraging iterative learning for continual PII extraction. Through extensive experimentation, our results reveal a notable underestimation of PII leakage in existing single-query attacks. In fact, we show that with sophisticated adversarial capabilities and a limited query budget, PII extraction rates can increase by up to fivefold when targeting the pretrained model. Moreover, we evaluate PII leakage on finetuned models, showing that they are more vulnerable to leakage than pretrained models. Overall, our work establishes a rigorous empirical benchmark for PII extraction attacks in realistic threat scenarios and provides a strong foundation for developing effective mitigation strategies.
Problem

Research questions and friction points this paper is trying to address.

Evaluating PII extraction attacks in LLMs across diverse threats
Assessing PII leakage underestimation in single-query attacks
Comparing vulnerability of pretrained vs finetuned models to PII leaks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Benchmark for evaluating PII extraction attacks
Advanced adversarial strategies for PII extraction
Empirical study of PII leakage in pretrained and finetuned models
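The comparison between pretrained and finetuned models reduces to measuring extraction rates per model. A minimal sketch of such a metric, assuming exact-match scoring and simple dict-based data shapes (not the benchmark's actual format):

```python
def extraction_rate(predicted, ground_truth):
    """Fraction of targets whose true PII appears among the attack's
    candidate predictions (exact match). `predicted` maps each target
    to a set of extracted strings; `ground_truth` maps it to the
    true PII value."""
    hits = sum(
        1 for target, true_pii in ground_truth.items()
        if true_pii in predicted.get(target, set())
    )
    return hits / len(ground_truth)
```

Running the same attack against both models and comparing these rates is, in essence, the standardized protocol the summary describes.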
K. K. Nakka
Huawei Munich Research Center, Germany

Ahmed Frikha
Cerebras Systems Inc.
Generative ML, Domain Generalization, Continual Learning, Multimodal Learning, Privacy-Preserving ML

Ricardo Mendes
Huawei Technologies Düsseldorf GmbH
Privacy-Preserving AI, Location Privacy, Ubiquitous Computing

Xue Jiang
Huawei Munich Research Center, Germany

Xuebing Zhou
Huawei Munich Research Center, Germany