Trade-offs in Data Memorization via Strong Data Processing Inequalities

📅 2025-06-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the fundamental trade-off between data memorization and generalization in learning, motivated by privacy risks when large language models are trained on sensitive user data. It develops a general framework that links strong data processing inequalities (SDPIs) to lower bounds on excess data memorization. The authors prove that in the small-sample regime any accurate learner must memorize at least Ω(d) bits about its training data, where d is the example dimension, and that the required memorization decays at a problem-specific rate as the number of samples grows. These lower bounds extend to mixture-of-clusters models and are generally matched, up to logarithmic factors, by simple learning algorithms. The results address key limitations of Brown et al. (2021), including overly restrictive model assumptions and loose lower bounds.

📝 Abstract
Recent research demonstrated that training large language models involves memorization of a significant fraction of training data. Such memorization can lead to privacy violations when training on sensitive user data and thus motivates the study of data memorization's role in learning. In this work, we develop a general approach for proving lower bounds on excess data memorization that relies on a new connection between strong data processing inequalities and data memorization. We then demonstrate that several simple and natural binary classification problems exhibit a trade-off between the number of samples available to a learning algorithm and the amount of information about the training data that the algorithm needs to memorize to be accurate. In particular, $\Omega(d)$ bits of information about the training data need to be memorized when $O(1)$ $d$-dimensional examples are available, and this requirement then decays as the number of examples grows at a problem-specific rate. Further, our lower bounds are generally matched (up to logarithmic factors) by simple learning algorithms. We also extend our lower bounds to more general mixture-of-clusters models. Our definitions and results build on the work of Brown et al. (2021) and address several limitations of the lower bounds in their work.
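As a schematic illustration of the trade-off described in the abstract: the abstract asserts only the $n = O(1)$ endpoint and a problem-specific decay, so the decay function $r(n)$ below is an assumed placeholder, not a result stated by the paper.

```latex
% Schematic sketch, not a theorem from the paper.
% S = training set of n examples in dimension d, A = learning algorithm,
% I(A(S); S) = mutual information between the model and its training data.
% r(n) is a problem-specific, non-increasing decay factor (assumed form).
\[
  I\bigl(A(S);\, S\bigr) \;\ge\; \Omega\!\bigl(d \cdot r(n)\bigr),
  \qquad r(n) = 1 \ \text{for } n = O(1),
  \qquad r(n) \searrow 0 \ \text{as } n \to \infty .
\]
```

The abstract further notes that these lower bounds are matched, up to logarithmic factors, by simple learning algorithms.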
Problem

Research questions and friction points this paper is trying to address.

Studying trade-offs between data memorization and learning efficiency
Proving lower bounds on excess memorization via data processing inequalities
Analyzing memorization requirements in binary classification and mixture models
Innovation

Methods, ideas, or system contributions that make the work stand out.

A new connection between strong data processing inequalities and data memorization
Lower bounds on excess memorization for simple, natural binary classification problems
A trade-off between sample size and required memorization, matched by simple learning algorithms