🤖 AI Summary
This work gives a more precise characterization of the generalization error of differentially private learning algorithms. By combining information-theoretic tools with typicality arguments and the stability properties inherent to differential privacy, it refines existing mutual-information upper bounds and establishes, for the first time, a computable and tight upper bound on maximal leakage. The approach applies uniformly to both in-expectation and high-probability analyses of generalization error, and the resulting bounds are both tighter than prior results and explicitly computable, yielding stronger generalization guarantees.
📝 Abstract
We study the generalization error of stochastic learning algorithms from an information-theoretic perspective, with a particular emphasis on deriving sharper bounds for differentially private algorithms. It is well known that the generalization error of stochastic learning algorithms can be bounded in terms of mutual information and maximal leakage, yielding in-expectation and high-probability guarantees, respectively. In this work, we further upper bound mutual information and maximal leakage by explicit, easily computable formulas, using typicality-based arguments and exploiting the stability properties of private algorithms. In the first part of the paper, we strictly improve the mutual-information bounds by Rodríguez-Gálvez et al. (IEEE Trans. Inf. Theory, 2021). In the second part, we derive new upper bounds on the maximal leakage of learning algorithms. In both cases, the resulting bounds on information measures translate directly into generalization error guarantees.
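For context, the classical guarantees the abstract builds on can be sketched as follows. These are the standard forms (mutual information: Xu & Raginsky, 2017; maximal leakage: Esposito et al., IEEE Trans. Inf. Theory, 2021), stated up to the exact constants in the cited works and assuming a $\sigma$-subgaussian loss; they are not the paper's refined bounds, whose point is precisely to upper bound $I(S;W)$ and $\mathcal{L}(S \to W)$ by explicit formulas for private algorithms.

```latex
% Notation (assumed here): S is the n-sample training set, W the hypothesis
% output by the algorithm, and gen(S, W) the population risk minus the
% empirical risk, for a loss \ell(w, Z) that is \sigma-subgaussian.

% In-expectation guarantee via mutual information (Xu & Raginsky, 2017):
\bigl| \mathbb{E}\bigl[ \mathrm{gen}(S, W) \bigr] \bigr|
  \;\le\; \sqrt{ \frac{2\sigma^{2}}{n} \, I(S; W) } .

% High-probability guarantee via maximal leakage (Esposito et al., 2021):
% with probability at least 1 - \delta over the draw of (S, W),
\bigl| \mathrm{gen}(S, W) \bigr|
  \;\le\; \sqrt{ \frac{2\sigma^{2} \bigl( \mathcal{L}(S \to W)
          + \log(2/\delta) \bigr)}{n} } .
```

Any computable upper bound on $I(S;W)$ or $\mathcal{L}(S \to W)$ therefore plugs directly into the right-hand sides above, which is how the bounds derived in this paper translate into generalization error guarantees.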