Entropy-Memorization Law: Evaluating Memorization Difficulty of Data in LLMs

📅 2025-07-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses how to quantify the memorization difficulty of training data in large language models (LLMs). The authors propose the Entropy-Memorization Law, presented as the first empirical finding to establish a strong linear relationship between the empirical entropy of input sequences and their model memorization scores. Leveraging this law, they derive an efficient dataset inference method that distinguishes training from test samples with minimal computational overhead. Empirical validation uses the open-source OLMo model family, combining empirical entropy estimation with memorization score evaluation in controlled-variable experiments to confirm the law's robustness across architectures and data regimes. A key insight is that seemingly random "gibberish" strings exhibit unexpectedly low empirical entropy, contradicting conventional intuitions about randomness, which in turn enables novel approaches to data provenance and privacy-leakage analysis. The Entropy-Memorization Law combines theoretical simplicity with practical efficacy, consistently outperforming existing baselines across diverse experimental settings.

📝 Abstract
Large Language Models (LLMs) are known to memorize portions of their training data, sometimes reproducing content verbatim when prompted appropriately. In this work, we investigate a fundamental yet under-explored question in the domain of memorization: How to characterize memorization difficulty of training data in LLMs? Through empirical experiments on OLMo, a family of open models, we present the Entropy-Memorization Law. It suggests that data entropy is linearly correlated with memorization score. Moreover, in a case study of memorizing highly randomized strings, or "gibberish", we observe that such sequences, despite their apparent randomness, exhibit unexpectedly low empirical entropy compared to the broader training corpus. Adopting the same strategy to discover Entropy-Memorization Law, we derive a simple yet effective approach to distinguish training and testing data, enabling Dataset Inference (DI).
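The abstract's central quantity, empirical entropy, can be illustrated with a short sketch. The paper's exact tokenization and entropy estimator are not specified in this summary, so the following assumes character-level Shannon entropy over observed symbol frequencies; it also shows why a "gibberish" string drawn from a small alphabet scores low despite looking random.

```python
from collections import Counter
import math

def empirical_entropy(seq):
    """Shannon entropy (bits/symbol) of the empirical symbol distribution."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# English-like text uses many distinct symbols; this "gibberish" uses only q/x/z,
# so its empirical entropy is bounded by log2(3) ≈ 1.58 bits/symbol.
text = "the quick brown fox jumps over the lazy dog"
gibberish = "qxqxzqzxqzxqxzqzqxzxqzqxqzxzqxqzxqzqxzqxzq"
print(empirical_entropy(text))       # higher
print(empirical_entropy(gibberish))  # lower, despite apparent randomness
```

The comparison is the point: apparent randomness to a human reader and low empirical entropy are compatible, which is what makes such sequences easy to memorize under the law.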
Problem

Research questions and friction points this paper is trying to address.

Characterize the memorization difficulty of training data in LLMs
Investigate the linear correlation between data entropy and memorization score
Develop an approach to distinguish training from testing data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes Entropy-Memorization Law for LLMs
Links data entropy to memorization scores
Enables Dataset Inference via entropy analysis
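The Dataset Inference idea above can be sketched as fitting the linear entropy-memorization relationship on samples known to be in the training set and flagging samples whose observed memorization score deviates from the fitted line. The paper's actual DI statistic is not detailed in this summary, so the function names and the residual-based decision rule here are illustrative assumptions.

```python
def fit_linear_law(entropies, scores):
    """Simple least-squares fit: score ≈ slope * entropy + intercept."""
    n = len(entropies)
    mx = sum(entropies) / n
    my = sum(scores) / n
    sxx = sum((x - mx) ** 2 for x in entropies)
    sxy = sum((x - mx) * (y - my) for x, y in zip(entropies, scores))
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

def residual(entropy, score, slope, intercept):
    """Deviation of an observed memorization score from the fitted law.
    Under the sketched decision rule, a large negative residual (much less
    memorized than the law predicts) suggests the sample was unseen in training."""
    return score - (slope * entropy + intercept)
```

A caller would fit the law on member samples, then threshold `residual(...)` on a candidate set; the threshold itself would be calibrated on held-out data.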