Tighter CMI-Based Generalization Bounds via Stochastic Projection and Quantization

📅 2025-10-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses two questions in statistical learning: (i) the looseness of existing conditional mutual information (CMI)-based generalization bounds, and (ii) whether "data memorization" is necessary for good generalization. We propose a CMI analysis framework grounded in stochastic projection and lossy quantization, which remains informative in high-dimensional or structured settings where conventional CMI bounds become vacuous. Theoretically, we show that memorizing a large fraction of the training samples is *not* necessary for generalization: for problem instances where prior MI/CMI bounds fail, the new bounds recover guarantees of order $\mathcal{O}(1/\sqrt{n})$, and for every learning algorithm there exists an auxiliary algorithm that does not memorize yet achieves comparable generalization error. This yields an information-theoretic characterization of the relationship between generalization and memorization, and provides theoretical foundations and analytical tools for designing learning algorithms that generalize well without relying heavily on memorizing the training data.
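For context, the baseline bound that this framework tightens can be sketched in the standard supersample CMI setup of Steinke and Zakynthinou (2020); the notation below is illustrative and assumes a loss bounded in $[0,1]$, not the paper's exact statement. With a supersample $\tilde{Z} \in \mathcal{Z}^{n \times 2}$, a uniform selector $U \sim \mathrm{Unif}(\{0,1\}^n)$ choosing one column per row as the training set, and hypothesis $W = \mathcal{A}(\tilde{Z}_U)$,

$$\bigl|\mathbb{E}[\mathrm{gen}(\mathcal{A})]\bigr| \;\le\; \sqrt{\frac{2\, I(W; U \mid \tilde{Z})}{n}}.$$

A stochastically projected and quantized proxy $\bar{W} = Q(\Pi(W))$, with $\Pi$ and $Q$ drawn independently of the data, is a randomized post-processing of $W$, so the data-processing inequality gives $I(\bar{W}; U \mid \tilde{Z}) \le I(W; U \mid \tilde{Z})$. The proxy's CMI can remain finite even when $I(W; U \mid \tilde{Z})$ is unbounded, which is the lever such projection-and-quantization bounds exploit, provided the losses of $W$ and $\bar{W}$ stay close.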

📝 Abstract
In this paper, we leverage stochastic projection and lossy compression to establish new conditional mutual information (CMI) bounds on the generalization error of statistical learning algorithms. It is shown that these bounds are generally tighter than the existing ones. In particular, we prove that for certain problem instances for which existing MI and CMI bounds were recently shown in Attias et al. [2024] and Livni [2023] to become vacuous or to fail to describe the right generalization behavior, our bounds yield suitable generalization guarantees of the order of $\mathcal{O}(1/\sqrt{n})$, where $n$ is the size of the training dataset. Furthermore, we use our bounds to investigate the problem of data "memorization" raised in those works, which asserts that there exist learning problem instances such that, for any learning algorithm with good prediction performance, there are data distributions under which the algorithm must "memorize" a large fraction of the training dataset. We show that for every learning algorithm, there exists an auxiliary algorithm that does not memorize and which yields comparable generalization error for any data distribution. In part, this shows that memorization is not necessary for good generalization.
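As a concrete, purely illustrative sketch of the kind of compression map involved, the snippet below applies a Gaussian random projection followed by uniform grid quantization to a learned weight vector. The projection dimension `k`, the grid step `delta`, and the choice of a Gaussian projection matrix are assumptions made for this example, not the paper's construction.

```python
# Illustrative sketch only: one plausible instance of a "stochastic projection
# + lossy quantization" map, applied to a learned weight vector.
import numpy as np

def project_and_quantize(w, k, delta, rng):
    """Map a d-dimensional hypothesis w to a coarse k-dimensional code."""
    d = w.shape[0]
    # Stochastic projection: random Gaussian matrix, scaled so squared norms
    # are preserved in expectation (Johnson-Lindenstrauss style).
    P = rng.normal(0.0, 1.0 / np.sqrt(k), size=(k, d))
    v = P @ w
    # Lossy quantization: round each coordinate to a grid of step delta.
    # The compressed hypothesis is describable with few bits, which is what
    # keeps its (conditional) mutual information with the data small.
    return delta * np.round(v / delta)

rng = np.random.default_rng(0)
w = rng.normal(size=10_000)                       # stand-in for trained weights
w_hat = project_and_quantize(w, k=64, delta=0.1, rng=rng)
print(w_hat.shape)                                # (64,)
```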
Problem

Research questions and friction points this paper is trying to address.

Establish tighter CMI bounds for generalization error
Address data memorization issues in learning algorithms
Prove memorization is unnecessary for good generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Stochastic projection and lossy compression techniques
Tighter conditional mutual information generalization bounds
Auxiliary algorithm achieving generalization without memorization (see the sketch after this list)
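A hedged reading of the auxiliary-algorithm idea referenced above, consistent with the abstract but not necessarily the paper's verbatim construction: given any learner $\mathcal{A}$, post-process its output with a stochastic projection $\Pi$ and a lossy quantizer $Q$, both drawn independently of the data, and let $\bar{W} = Q(\Pi(W))$ with $W = \mathcal{A}(\tilde{Z}_U)$. Since $\bar{W}$ is discrete and a post-processing of $W$,

$$I\bigl(\bar{W}; U \mid \tilde{Z}\bigr) \;\le\; \min\bigl\{\, I(W; U \mid \tilde{Z}),\; H(\bar{W}) \,\bigr\},$$

so the auxiliary hypothesis can reveal at most its finite description length about which samples were used for training, i.e. it cannot memorize much, while under assumptions that keep the losses of $W$ and $\bar{W}$ close its generalization error remains comparable to that of the original learner.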
Milad Sefidgaran
Senior ML Researcher
Machine Learning · Deep Learning · Information Theory

Kimia Nadjahi
CNRS - ENS Paris

Abdellatif Zaidi
Paris Research Center, Huawei Technologies France, Université Gustave Eiffel, France