🤖 AI Summary
Large language models (LLMs) can inadvertently memorize and leak personal information, complicating compliance with the GDPR's right to be forgotten (RTBF).
Method: We propose the first individual-granularity privacy-auditing framework, built on WikiMem, a dataset of 5,000+ natural-language memory probes generated from Wikidata attributes. The framework combines counterfactual prompt rewriting with calibrated negative log-likelihood scoring, and its model-agnostic metric quantifies factual recall at the person–fact association level.
Contribution/Results: We precisely identify and quantify individual–fact memory associations; across 15 mainstream LLMs, we empirically show that memorization strength correlates with both a subject's web presence and model scale; and we establish a scalable, verifiable foundation for machine unlearning and dynamic, individual-level RTBF enforcement.
📝 Abstract
Large Language Models (LLMs) can memorize and reveal personal information, raising concerns regarding compliance with the EU's GDPR, particularly the Right to Be Forgotten (RTBF). Existing machine unlearning methods assume the data to forget is already known but do not address how to identify which individual-fact associations are stored in the model. Privacy auditing techniques typically operate at the population level or target a small set of identifiers, limiting applicability to individual-level data inquiries. We introduce WikiMem, a dataset of over 5,000 natural language canaries covering 243 human-related properties from Wikidata, and a model-agnostic metric to quantify human-fact associations in LLMs. Our approach ranks ground-truth values against counterfactuals using calibrated negative log-likelihood across paraphrased prompts. We evaluate 200 individuals across 15 LLMs (410M-70B parameters), showing that memorization correlates with subject web presence and model scale. We provide a foundation for identifying memorized personal data in LLMs at the individual level, enabling the dynamic construction of forget sets for machine unlearning and RTBF requests.
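The core metric described above — ranking the ground-truth value against counterfactuals by calibrated negative log-likelihood (NLL), averaged over paraphrased prompts — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `nll` scorer here is a stand-in (in practice it would query an LLM for the NLL of a value as a completion of a prompt), and the calibration scheme (subtracting the value's NLL under a neutral prompt) and the subject "Jane Doe" are assumptions for the example.

```python
def calibrated_nll(nll, prompt, value, calibration_prompt="The city is"):
    # Calibrate by subtracting the value's NLL under a neutral prompt,
    # discounting the value's prior likelihood (an assumed calibration scheme).
    return nll(prompt, value) - nll(calibration_prompt, value)

def mean_rank(nll, paraphrases, true_value, counterfactuals):
    # Rank 1 = lowest calibrated NLL; a low mean rank across paraphrases
    # suggests the person-fact association is memorized.
    ranks = []
    for prompt in paraphrases:
        scores = {v: calibrated_nll(nll, prompt, v)
                  for v in [true_value] + counterfactuals}
        ordered = sorted(scores, key=scores.get)
        ranks.append(ordered.index(true_value) + 1)
    return sum(ranks) / len(ranks)

# Toy stand-in scorer: pretend the model strongly associates "Paris"
# with the (hypothetical) subject Jane Doe.
def toy_nll(prompt, value):
    prior = {"Paris": 2.0, "Berlin": 2.5, "Madrid": 3.0}[value]
    boost = 1.5 if "Jane Doe" in prompt and value == "Paris" else 0.0
    return prior - boost

paraphrases = ["Jane Doe was born in", "The birthplace of Jane Doe is"]
mean_rank(toy_nll, paraphrases, "Paris", ["Berlin", "Madrid"])  # 1.0
```

With a real model, `nll` would sum per-token negative log-probabilities of the value tokens given the prompt; the paraphrase set corresponds to WikiMem's natural-language canaries for a given Wikidata property.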