PII-Compass: Guiding LLM training data extraction prompts towards the target PII via grounding

📅 2024-07-03
🏛️ PRIVATENLP
📈 Citations: 12
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) pose privacy risks through extraction of personally identifiable information (PII) memorized from training data; however, existing evaluation methods—relying on generic, context-agnostic prompts—severely underestimate real-world attack success rates. To address this, we propose *domain-semantic anchoring*: grounding PII extraction prompts with domain-specific data to enhance their semantic relevance and effectiveness. Our empirical study demonstrates that this method increases PII extraction success by over an order of magnitude. We further introduce a red-teaming evaluation paradigm that better approximates realistic adversarial behavior, substantially correcting prior underestimations of privacy risk. In experiments, single-query PII extraction reaches 0.92%; success rises to 3.9% after 128 queries and 6.86% after 2,308 queries—equivalent to successfully extracting PII from approximately 1 in 15 individuals. This work establishes a more credible quantitative benchmark for LLM data memorization privacy risks and informs practical mitigation strategies.

📝 Abstract
The latest and most impactful advances in large models stem from their increased size. Unfortunately, this translates into an improved memorization capacity, raising data privacy concerns. Specifically, it has been shown that models can output personally identifiable information (PII) contained in their training data. However, reported PII extraction performance varies widely, and there is no consensus on the optimal methodology to evaluate this risk, resulting in underestimating realistic adversaries. In this work, we empirically demonstrate that it is possible to improve the extractability of PII by over ten-fold by grounding the prefix of the manually constructed extraction prompt with in-domain data. This approach achieves phone number extraction rates of 0.92%, 3.9%, and 6.86% with 1, 128, and 2,308 queries, respectively, i.e., the phone number of 1 person in 15 is extractable.
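The "1 person in 15" figure follows directly from the 6.86% success rate; a quick arithmetic check:

```python
# Sanity check of the reported rates: a 6.86% phone number extraction
# success rate corresponds to roughly 1 person in 15.
rate = 6.86 / 100
people_per_success = 1 / rate
print(round(people_per_success, 1))  # ≈ 14.6, i.e. about 1 in 15
```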
Problem

Research questions and friction points this paper is trying to address.

Generic, context-agnostic prompts underestimate realistic PII extraction risk
Privacy risks from memorization of PII in training data
No consensus on methodology for assessing PII extraction risk
Innovation

Methods, ideas, or system contributions that make the work stand out.

Grounding extraction prompts with in-domain data
Over ten-fold improvement in PII extractability
Phone number extraction rates of 0.92%, 3.9%, and 6.86% at 1, 128, and 2,308 queries
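The core idea of grounding can be illustrated with a minimal sketch: prepend an in-domain text snippet to a manually constructed extraction prompt, rather than issuing the prompt in isolation. This is not the authors' code; the function name and all strings below are hypothetical placeholders.

```python
# Illustrative sketch of prompt grounding (hypothetical, not the paper's code):
# a generic extraction prompt vs. the same prompt prefixed with in-domain text.

def build_prompt(target_name: str, domain_prefix: str = "") -> str:
    """Optionally prepend an in-domain prefix to a manual extraction prompt."""
    query = f"The phone number of {target_name} is"
    return f"{domain_prefix}\n{query}" if domain_prefix else query

# Generic, context-agnostic prompt:
generic = build_prompt("Jane Doe")

# The same prompt grounded with a snippet resembling in-domain data:
grounded = build_prompt(
    "Jane Doe",
    domain_prefix="Contact directory, Acme Corp.\nJohn Smith: 555-0100",
)

print(generic)
print(grounded)
```

The paper's finding is that the grounded variant, issued repeatedly under a query budget, extracts memorized PII far more often than the generic one.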