🤖 AI Summary
This work addresses the privacy risks posed by large language models (LLMs) that memorize personally identifiable information, such as email addresses, phone numbers, and IP addresses, encountered in their training data and can later reproduce it verbatim. The authors propose R&R, a detection framework combining regular expressions and rule-based heuristics, to quantify verbatim memorization of sensitive data in LLMs at scale. Using 483 human-annotated samples, the Pythia model family (160M to 6.9B parameters), and greedy decoding, they systematically evaluate how model size and the number of training steps influence memorization. Results show that Pythia-6.9B reproduces 13.6% of the sensitive samples verbatim, while even the smallest model exhibits a 2.7% reproduction rate, demonstrating that privacy leakage is pervasive and underscoring the critical need for rigorous filtering and anonymization of pretraining data.
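To illustrate the regex-plus-rules style of detector the summary describes, here is a minimal sketch in Python. The patterns below are illustrative assumptions, not the paper's actual R&R suite (whose rules are not reproduced here), and a production detector would add validation rules on top of the raw regex matches.

```python
import re

# Illustrative patterns only (assumption): the paper's R&R suite layers
# additional rule-based heuristics over regexes like these.
PI_PATTERNS = {
    "email": re.compile(r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,14}\d"),
    "ipv4": re.compile(
        r"\b(?:(?:25[0-5]|2[0-4]\d|1?\d?\d)\.){3}(?:25[0-5]|2[0-4]\d|1?\d?\d)\b"
    ),
}

def detect_pi(text: str) -> list[tuple[str, str, int, int]]:
    """Return (type, match, start, end) for each candidate PI span."""
    hits = []
    for kind, pattern in PI_PATTERNS.items():
        for m in pattern.finditer(text):
            hits.append((kind, m.group(), m.start(), m.end()))
    return sorted(hits, key=lambda h: h[2])

print(detect_pi("Contact alice@example.com at 192.168.0.1"))
```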
📝 Abstract
Modern language models (LMs) are trained on large scrapes of the Web containing millions of personal information (PI) instances, many of which LMs memorize, increasing privacy risks. In this work, we develop the regexes and rules (R&R) detector suite to detect email addresses, phone numbers, and IP addresses, which outperforms the best regex-based PI detectors. On a manually curated set of 483 PI instances, we measure memorization, finding that 13.6% are parroted verbatim by the Pythia-6.9b model: when the model is prompted with the tokens that precede the PI in the original document, greedy decoding generates the entire PI span exactly. We expand this analysis to models of varying sizes (160M-6.9B) and pretraining time steps (70k-143k iterations) in the Pythia model suite and find that both model size and amount of pretraining are positively correlated with memorization. Even the smallest model, Pythia-160m, parrots 2.7% of the instances exactly. Consequently, we strongly recommend that pretraining datasets be aggressively filtered and anonymized to minimize PI parroting.
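To make the parroting test concrete, below is a minimal sketch using the public EleutherAI/pythia-160m checkpoint via Hugging Face `transformers`. The 50-token prefix length and the decoded-string exact-match check are assumptions for illustration; the paper's precise protocol may differ.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Public Pythia checkpoint; the 50-token prefix length below is an
# assumption for illustration, not the paper's exact setting.
MODEL = "EleutherAI/pythia-160m"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def is_parroted(document: str, pi_span: str, prefix_tokens: int = 50) -> bool:
    """Greedy-decode from the tokens preceding pi_span in document and
    check whether the generation reproduces the PI span exactly."""
    prefix_text = document[: document.index(pi_span)]
    ids = tokenizer(prefix_text, return_tensors="pt").input_ids[:, -prefix_tokens:]
    span_len = len(tokenizer(pi_span).input_ids)
    with torch.no_grad():
        out = model.generate(
            ids,
            max_new_tokens=span_len + 5,      # small slack for tokenization drift
            do_sample=False,                  # greedy decoding, as in the paper
            pad_token_id=tokenizer.eos_token_id,
        )
    continuation = tokenizer.decode(out[0, ids.shape[1]:])
    return continuation.lstrip().startswith(pi_span)
```

Aggregating `is_parroted` over the 483 annotated instances, for each model size and checkpoint, would yield reproduction rates of the kind reported above (13.6% for Pythia-6.9b, 2.7% for Pythia-160m).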