🤖 AI Summary
This work addresses the underestimation of personally identifiable information (PII) leakage in large language models (LLMs) that results from evaluating only single-query attacks on training data. We propose PII-Scope, the first benchmark to comprehensively cover realistic threat scenarios, including single-turn and multi-turn adversarial querying and iterative learning. The benchmark pairs a prompt-engineering-based attack framework, featuring diversity-aware sampling and adaptive demonstration selection, with a standardized evaluation protocol that compares PII leakage between pre-trained and fine-tuned models. Experimental results reveal that fine-tuned models exhibit significantly higher PII leakage than their pre-trained counterparts, that multi-turn adversarial queries boost extraction rates by up to 5×, and that hyperparameters, especially demonstration selection, critically influence attack efficacy. PII-Scope establishes the first realistic, threat-informed benchmark for assessing PII leakage in LLMs, enabling more rigorous security evaluation and informing robust defense strategies.
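To make the attack loop concrete, here is a minimal Python sketch of the kind of multi-query, demonstration-based extraction the summary describes. Everything in it is an illustrative assumption rather than code from PII-Scope: `query_model` stands in for the target LLM's generation API, `DEMONSTRATIONS` is a hypothetical pool of known (prefix, PII) pairs, and the regex only covers email-style PII.

```python
import random
import re

# Hypothetical stand-in for the target LLM; in practice this would call the
# generation API of a pretrained or fine-tuned model.
def query_model(prompt: str) -> str:
    return ""  # placeholder completion

# Small pool of (prefix, PII) demonstration pairs assumed known to the attacker.
DEMONSTRATIONS = [
    ("Contact Alice Smith at", "alice.smith@example.com"),
    ("Reach Bob Jones via", "bob.jones@example.org"),
    ("Email Carol White:", "carol.white@example.net"),
]

EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def build_prompt(target_prefix: str, demos, k: int) -> str:
    """Assemble a few-shot extraction prompt from k randomly sampled demonstrations."""
    shots = "\n".join(f"{prefix} {pii}" for prefix, pii in random.sample(demos, k))
    return f"{shots}\n{target_prefix}"

def multi_query_attack(target_prefix: str, budget: int = 16, k: int = 2) -> set:
    """Repeatedly query the model with diverse few-shot prompts and collect
    every distinct PII string that appears in the completions."""
    extracted = set()
    for _ in range(budget):
        prompt = build_prompt(target_prefix, DEMONSTRATIONS, k)
        completion = query_model(prompt)
        extracted.update(EMAIL_PATTERN.findall(completion))
    return extracted

if __name__ == "__main__":
    leaked = multi_query_attack("You can contact Dana Lee at")
    print(f"{len(leaked)} candidate PII strings recovered: {leaked}")
```

In this sketch, query diversity comes only from re-sampling the demonstration set on every call; the adaptive demonstration selection and iterative learning strategies mentioned above would replace that random sampling step with an informed choice based on previous completions.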
📝 Abstract
In this work, we introduce PII-Scope, a comprehensive benchmark designed to evaluate state-of-the-art methodologies for PII extraction attacks targeting LLMs across diverse threat settings. Our study provides a deeper understanding of these attacks by uncovering several hyperparameters (e.g., demonstration selection) crucial to their effectiveness. Building on this understanding, we extend our study to more realistic attack scenarios, exploring PII attacks that employ advanced adversarial strategies, including repeated and diverse querying and iterative learning for continual PII extraction. Through extensive experimentation, our results reveal a notable underestimation of PII leakage by existing single-query attacks. In fact, we show that, with sophisticated adversarial capabilities and a limited query budget, PII extraction rates can increase by up to fivefold when targeting the pretrained model. Moreover, we evaluate PII leakage on finetuned models, showing that they are more vulnerable to leakage than pretrained models. Overall, our work establishes a rigorous empirical benchmark for PII extraction attacks in realistic threat scenarios and provides a strong foundation for developing effective mitigation strategies.
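For intuition about how leakage can be quantified when comparing a pretrained and a finetuned model under the same query budget, below is a minimal sketch of an extraction-rate metric (recovered PII divided by ground-truth PII). The metric definition, names, and numbers are illustrative assumptions, not results or code from the paper.

```python
def extraction_rate(recovered: set, ground_truth: set) -> float:
    """Fraction of ground-truth PII strings recovered by the attack."""
    if not ground_truth:
        return 0.0
    return len(recovered & ground_truth) / len(ground_truth)

# Purely illustrative inputs: the PII known to be in the training data, and the
# strings recovered from each model under an identical attack budget.
ground_truth = {"alice.smith@example.com", "bob.jones@example.org",
                "carol.white@example.net", "dana.lee@example.com"}
pretrained_hits = {"alice.smith@example.com"}
finetuned_hits = {"alice.smith@example.com", "bob.jones@example.org",
                  "dana.lee@example.com"}

print(f"pretrained extraction rate: {extraction_rate(pretrained_hits, ground_truth):.2f}")
print(f"finetuned extraction rate:  {extraction_rate(finetuned_hits, ground_truth):.2f}")
```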