The Mosaic Memory of Large Language Models

📅 2024-05-24
📈 Citations: 3
Influential: 0
🤖 AI Summary
This work challenges the prevailing assumption that large language model (LLM) memorization arises solely from exact repetitions in training data, proposing instead a "mosaic memory" mechanism: models reconstruct information by assembling semantically similar yet surface-divergent fuzzy repetitions. Using empirical analysis and controlled perturbation experiments, combined with output attribution, quantitative measurement of memorization strength, and syntactic/semantic decoupling evaluation, the authors find that a fuzzy repetition contributes up to 80% of the memorization effect of an exact repetition, and that memorization relies predominantly on syntactic similarity rather than semantic consistency. The phenomenon is robust across mainstream LLMs and persists in unsanitized real-world data, indicating that deduplication-based preprocessing fails to mitigate memorization risks. These findings carry critical implications for privacy preservation, rigorous model evaluation, and trustworthy deployment of LLMs.

📝 Abstract
As Large Language Models (LLMs) become widely adopted, understanding how they learn from, and memorize, training data becomes crucial. Memorization in LLMs is widely assumed to occur only as a result of sequences being repeated in the training data. Instead, we show that LLMs memorize by assembling information from similar sequences, a phenomenon we call mosaic memory. We show that major LLMs exhibit mosaic memory, with fuzzy duplicates contributing to memorization as much as 0.8 of an exact duplicate, and even heavily modified sequences contributing substantially to memorization. Although models display reasoning capabilities, we somewhat surprisingly show memorization to be predominantly syntactic rather than semantic. We finally show fuzzy duplicates to be ubiquitous in real-world data, untouched by deduplication techniques. Taken together, our results challenge widely held beliefs and show memorization to be a more complex, mosaic process, with real-world implications for privacy, confidentiality, model utility, and evaluation.
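The abstract's claim that fuzzy duplicates are "untouched by deduplication techniques" can be illustrated with a minimal sketch (not the paper's pipeline; the function names and toy sentences are hypothetical): hash-based exact deduplication drops only byte-identical copies, so a sequence altered by a single token survives, despite sharing most of its token n-grams with the original.

```python
import hashlib

def exact_dedup(sequences):
    """Keep only the first occurrence of each byte-identical sequence,
    mimicking hash-based exact deduplication."""
    seen, kept = set(), []
    for seq in sequences:
        digest = hashlib.sha256(seq.encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append(seq)
    return kept

def ngram_jaccard(a, b, n=3):
    """Token n-gram Jaccard similarity: a simple fuzzy-overlap measure."""
    def grams(s):
        toks = s.split()
        return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}
    ga, gb = grams(a), grams(b)
    return len(ga & gb) / len(ga | gb)

original = "the secret access code for the server room is 4812 and must not be shared"
fuzzy = "the secret access code for the server room is 4812 and should not be shared"

kept = exact_dedup([original, fuzzy, original])
print(len(kept))  # 2: the exact repeat is removed, the fuzzy copy survives
print(ngram_jaccard(original, fuzzy))  # 0.625: high overlap, yet undeduplicated
```

In other words, a filter keyed on exact matches is blind to precisely the fuzzy repetitions the paper shows drive memorization.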
Problem

Research questions and friction points this paper is trying to address.

Memorization is widely assumed to require exact repetition of sequences in the training data
Whether memorization is syntactic or semantic is unresolved, despite models' reasoning capabilities
Fuzzy duplicates in real-world data go undetected by standard deduplication methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Shows that LLMs memorize by assembling information from similar sequences (mosaic memory)
Quantifies fuzzy duplicates' contribution: up to 0.8 of an exact duplicate's memorization effect
Demonstrates that memorization is predominantly syntactic rather than semantic
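The findings above rest on controlled perturbation experiments with fuzzy duplicates. A minimal sketch of how such duplicates of a canary sequence might be generated by random token substitution follows (the function, vocabulary, and perturbation scheme here are illustrative assumptions, not the paper's exact setup):

```python
import random

def fuzzy_duplicate(sequence, n_swaps, vocab, rng):
    """Create a surface-divergent copy by replacing n_swaps randomly chosen
    tokens with words drawn from vocab. A stand-in for controlled
    perturbation; the paper's actual scheme may differ."""
    tokens = sequence.split()
    for pos in rng.sample(range(len(tokens)), k=n_swaps):
        tokens[pos] = rng.choice(vocab)
    return " ".join(tokens)

rng = random.Random(0)
vocab = ["alpha", "beta", "gamma", "delta"]
canary = "the quick brown fox jumps over the lazy dog near the river bank"

# Increasing n_swaps yields progressively "fuzzier" duplicates of the canary.
for k in (1, 3, 6):
    print(k, fuzzy_duplicate(canary, k, vocab, rng))
```

Injecting such perturbed copies into training data and measuring how well the model still reproduces the canary is one way to quantify a fuzzy duplicate's contribution relative to an exact one.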