Private Memorization Editing: Turning Memorization into a Defense to Strengthen Data Privacy in Large Language Models

📅 2025-06-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the privacy risk wherein large language models (LLMs) inadvertently memorize and leak personally identifiable information (PII), this paper proposes a paradigm shift: transforming model memory from a “source of risk” into a “resource for defense.” Methodologically, we introduce the first end-to-end memory manipulation framework—leveraging gradient-based analysis to precisely localize PII-associated memory traces, performing parameter-level knowledge editing, and validating efficacy via closed-loop evaluation under privacy attacks. Our core contribution is the conceptual and technical realization of “memory-as-defense,” transcending conventional mitigation strategies reliant solely on data sanitization or full-model fine-tuning. Experiments across diverse configurations demonstrate substantial reductions in PII leakage; in several settings, adversarial PII extraction accuracy drops to zero, while language modeling performance remains fully preserved.
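The pipeline described above (gradient-based localization of a memorized PII trace, followed by a parameter-level edit) can be illustrated with a toy sketch. This is not the paper's actual method, which operates on transformer weights in a full LLM; here a single linear next-token layer, the tokens, the top-5% threshold, and the gradient-ascent edit rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an LLM layer (hypothetical: PME itself edits the weights
# of a full transformer).  logits = W @ embedding(context_token).
V, D = 20, 8                       # vocab size, hidden size
W = rng.normal(0.0, 0.5, (V, D))   # editable "model parameters"
E = rng.normal(0.0, 1.0, (V, D))   # fixed token embeddings

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def loss_and_grad(W, ctx, nxt):
    """Cross-entropy of predicting token `nxt` after `ctx`, and dL/dW."""
    h = E[ctx]
    p = softmax(W @ h)
    g = np.outer(p, h)
    g[nxt] -= h                    # (p - onehot(nxt)) outer h
    return -np.log(p[nxt]), g

CTX, PII = 3, 7                    # pretend token 7 is a memorized PII token

# Step 0: "memorize" the PII continuation by overfitting on that pair.
for _ in range(30):
    _, g = loss_and_grad(W, CTX, PII)
    W -= 0.5 * g
p_before = softmax(W @ E[CTX])[PII]

# Step 1 (localize): rank parameters by gradient magnitude on the memorized
# pair and keep the top 5% as the putative "memory trace".
_, g = loss_and_grad(W, CTX, PII)
mask = np.abs(g) >= np.quantile(np.abs(g), 0.95)

# Step 2 (edit): gradient *ascent* restricted to the localized weights,
# stopping once the model no longer recalls the PII token.
for _ in range(2000):
    if softmax(W @ E[CTX])[PII] < 0.05:
        break
    _, g = loss_and_grad(W, CTX, PII)
    W += 0.5 * g * mask

p_after = softmax(W @ E[CTX])[PII]
print(f"P(PII | context): before={p_before:.3f}, after={p_after:.3f}")
```

Because the update touches only the masked parameters, the rest of the model is left untouched, which is the intuition behind the paper's claim that language modeling performance is preserved while extraction accuracy drops.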

📝 Abstract
Large Language Models (LLMs) memorize their training data and thus, given the huge amounts of uncontrolled data they ingest, may memorize Personally Identifiable Information (PII), which should be neither stored nor leaked. In this paper, we introduce Private Memorization Editing (PME), an approach for preventing private data leakage that turns an apparent limitation, the LLMs' memorization ability, into a powerful privacy defense strategy. While attacks against LLMs have been performed by exploiting prior knowledge of their training data, our approach exploits the same kind of knowledge to make a model more robust. We detect memorized PII and then mitigate its memorization by editing the model's knowledge of its training data. We verify that our procedure does not affect the underlying language model while making it more robust against privacy Training Data Extraction attacks. We demonstrate that PME can effectively reduce the amount of leaked PII across a range of configurations, in some cases even reducing the accuracy of the privacy attacks to zero.
Problem

Research questions and friction points this paper is trying to address.

Prevent private data leakage in LLMs by editing memorized PII
Transform LLM memorization into a privacy defense strategy
Reduce accuracy of privacy attacks on training data extraction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Detect and mitigate memorized PII in LLMs
Edit model knowledge to enhance privacy
Reduce PII leakage without affecting performance
Elena Sofia Ruzzetti
Human Centric ART, University of Rome Tor Vergata, Italy
Giancarlo A. Xompero
Human Centric ART, University of Rome Tor Vergata, Italy; Almawave S.p.A., Rome, Italy
Davide Venditti
Human Centric ART, University of Rome Tor Vergata, Italy
Fabio Massimo Zanzotto
Associate Professor, University of Rome "Tor Vergata"
Artificial Intelligence, Natural Language Processing, Machine Learning