🤖 AI Summary
Large language models (LLMs) may memorize training data rather than generalize, raising concerns about privacy leakage, copyright infringement, and the reliability of evaluation metrics. To address this, we propose PEARL, the first black-box memorization detection framework that requires no access to model internals, no knowledge of the training set, and no gradient computation. PEARL quantifies memorization strength by measuring output consistency under input perturbations (synonym substitution, word-order shuffling, and token-level noise injection) and evaluates consistency via semantic similarity, token overlap, and logical equivalence. The method transfers across models and is shown to be robust on the Pythia model family. Empirical analysis reveals that GPT-4o reproduces Bible passages and HumanEval code snippets nearly verbatim, and further provides supporting evidence that *The New York Times* content was likely part of its training corpus. PEARL thus enables scalable, assumption-light auditing of data memorization in both closed- and open-weight LLMs.
📝 Abstract
While Large Language Models (LLMs) achieve remarkable performance through training on massive datasets, they can exhibit concerning behaviors such as verbatim reproduction of training data rather than true generalization. This memorization phenomenon raises significant concerns about data privacy, intellectual property rights, and the reliability of model evaluations. This paper introduces PEARL, a novel approach for detecting memorization in LLMs. PEARL assesses how sensitive an LLM's performance is to input perturbations, enabling memorization detection without requiring access to the model's internals. We investigate how input perturbations affect the consistency of outputs, allowing us to distinguish true generalization from memorization. Our findings, based on extensive experiments with the open Pythia models, provide a robust framework for identifying when a model simply regurgitates learned information. Applied to the GPT-4o model, the PEARL framework not only identified memorization of classic texts from the Bible and common code from HumanEval, but also demonstrated that it can provide supporting evidence that some data, such as New York Times news articles, were likely part of a given model's training data.
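The perturb-and-compare idea described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it uses word-order shuffling as the sole perturbation and token-set overlap (Jaccard) as the sole consistency measure, and `model` stands in for any black-box text-generation callable. The intuition is that a model which has memorized a passage keeps emitting it nearly unchanged even when the prompt is perturbed, so high output consistency under perturbation is evidence of memorization.

```python
import random


def shuffle_words(text, rng):
    # Word-order shuffling perturbation: randomly permute the prompt's words.
    words = text.split()
    rng.shuffle(words)
    return " ".join(words)


def token_overlap(a, b):
    # Jaccard overlap between token sets, a simple stand-in for the
    # consistency measures (semantic similarity, token overlap) in the text.
    ta, tb = set(a.split()), set(b.split())
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)


def memorization_score(model, prompt, n_perturbations=5, seed=0):
    # Average output consistency between the unperturbed prompt and several
    # perturbed variants; values near 1.0 suggest regurgitation rather than
    # generalization.  `model` is any black-box str -> str callable.
    rng = random.Random(seed)
    baseline = model(prompt)
    scores = [
        token_overlap(baseline, model(shuffle_words(prompt, rng)))
        for _ in range(n_perturbations)
    ]
    return sum(scores) / len(scores)
```

For example, a hypothetical model that always completes a prompt with the same memorized verse scores 1.0 (perfect consistency), whereas a model whose output genuinely depends on the input wording scores lower.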