🤖 AI Summary
Existing frameworks for generalization error analysis struggle to accommodate general discrete-time Markov optimization algorithms, such as SGD variants, because of their inherently non-i.i.d. and non-stationary dynamics.
Method: We propose a unified analytical framework based on a Poissonized continuous-time approximation. We introduce a Poissonization paradigm applicable to arbitrary Markov algorithms, construct the first entropy flow that rigorously corresponds to discrete-time algorithms, and establish its fundamental connection to the modified logarithmic Sobolev inequality (MLSI), enabling unified modeling of both stochastic and deterministic algorithms.
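For intuition, here is a hedged sketch of the textbook Poissonization device (the standard construction; the paper's paradigm builds on this idea for arbitrary Markov algorithms, and its exact definition may differ): a discrete-time chain with transition kernel K is run at the arrival times of an independent Poisson clock, yielding a genuine continuous-time Markov process.

```latex
% Standard Poissonization of a discrete-time Markov chain (X_k) with
% transition kernel K (textbook construction, not necessarily the
% paper's exact definition):
\tilde{X}_t := X_{N_t}, \qquad N_t \sim \mathrm{Poisson}(\lambda t).
% The resulting process is Markov with generator
\mathcal{L} f := \lambda \,(K f - f),
% and its marginals are Poisson mixtures of the discrete-time marginals:
\mathbb{E}\!\left[f(\tilde{X}_t)\right]
  = e^{-\lambda t} \sum_{k \ge 0} \frac{(\lambda t)^k}{k!}\,
    \mathbb{E}\!\left[f(X_k)\right].
```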
Contributions: (i) A novel PAC-Bayesian generalization bound; (ii) Poissonized reformulations of existing bounds and several new, tighter bounds; (iii) Explicit sufficient conditions under which common algorithms satisfy the MLSI, substantially tightening the resulting bounds and broadening applicability across discrete-time Markov optimization algorithms.
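A minimal numerical sketch of the same idea (our own toy code, not from the paper; the quadratic loss, step size, and noise scale are arbitrary placeholders): simulate a discrete-time SGD trajectory, then read it off at Poisson arrival times to obtain the continuous-time proxy X_t = w_{N_t}.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd_step(w, lr=0.1):
    """One SGD step on a toy quadratic loss 0.5 * ||w||^2 with Gaussian
    gradient noise; stands in for an arbitrary Markov update kernel."""
    grad = w + 0.1 * rng.standard_normal(w.shape)
    return w - lr * grad

# Discrete-time trajectory: w_0, w_1, ..., w_K.
K, d = 200, 2
w = np.zeros((K + 1, d))
for k in range(K):
    w[k + 1] = sgd_step(w[k])

# Poissonized proxy: the same iterates, read off at the arrival times of
# an independent rate-lambda Poisson clock, i.e. X_t = w_{N_t}.
lam, horizon = 1.0, float(K)
arrivals = np.cumsum(rng.exponential(1.0 / lam, size=5 * K))
arrivals = arrivals[arrivals <= horizon]

def poissonized(t):
    """Value of the continuous-time proxy at time t: the number of
    Poisson arrivals up to t indexes into the discrete trajectory."""
    n_t = np.searchsorted(arrivals, t, side="right")
    return w[min(n_t, K)]

print("w_K         =", w[K])
print("X_{horizon} =", poissonized(horizon))
```

With rate lambda = 1, the Poisson clock ticks once per unit time on average, so X_t tracks w_{⌊t⌋} up to Poisson fluctuations; this embedding is what makes continuous-time tools such as entropy flows and the MLSI available for a discrete-time algorithm.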
📝 Abstract
Using continuous-time stochastic differential equation (SDE) proxies for stochastic optimization algorithms has proven fruitful for understanding their generalization abilities. A significant part of these approaches is based on so-called "entropy flows", which greatly simplify the generalization analysis. Unfortunately, such well-structured entropy flows cannot be obtained for most discrete-time algorithms, and existing SDE approaches remain limited to specific noise and algorithmic structures. We aim to alleviate this issue by introducing a generic framework for analyzing the generalization error of Markov algorithms through "Poissonization", a continuous-time approximation of discrete-time processes with formal approximation guarantees. Through this approach, we first develop a novel entropy flow, which directly leads to PAC-Bayesian generalization bounds. We then draw novel links to modified versions of the celebrated logarithmic Sobolev inequalities (LSI), identify cases where such LSIs are satisfied, and obtain improved bounds. Beyond its generality, our framework allows exploiting specific properties of learning algorithms. In particular, we incorporate the noise structure of different algorithm types, namely those with additional noise injections (noisy) and those without (non-noisy), through various technical tools. This illustrates the capacity of our methods to recover known bounds (now Poissonized) and to derive new generalization bounds.
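For background, the standard modified log-Sobolev inequality for a continuous-time Markov process is stated below (the classical Bobkov-Tetali form; the modified LSIs used in the paper may differ in their details):

```latex
% Modified log-Sobolev inequality (MLSI) for a reversible Markov
% generator L with invariant measure \pi and Dirichlet form
% \mathcal{E}(f, g) := -\int f \, L g \, d\pi.
% MLSI with constant \rho > 0: for all f > 0,
\rho \, \operatorname{Ent}_\pi(f) \le \mathcal{E}(f, \log f),
\qquad
\operatorname{Ent}_\pi(f) := \mathbb{E}_\pi[f \log f]
  - \mathbb{E}_\pi[f] \log \mathbb{E}_\pi[f].
% In the reversible case this is equivalent to exponential entropy
% decay along the semigroup P_t = e^{tL}:
\operatorname{Ent}_\pi(P_t f) \le e^{-\rho t} \operatorname{Ent}_\pi(f).
```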