🤖 AI Summary
This paper addresses the degradation of convergence rates that sample dependence causes in square-loss statistical learning with β-mixing data. We propose a weakly sub-Gaussian function class framework combined with a mixed-tail generic chaining technique. Without realizability assumptions, and requiring only that the $L^2$ and $\Psi_p$ topologies are comparable on the hypothesis class, we establish, for the first time, a *near mixing-free* rate: the dominant convergence term is independent of the mixing time, while mixing effects enter only through a higher-order additive term. The resulting non-asymptotic bound is sharp and instance-optimal. Technically, the analysis integrates $\Psi_p$-norm techniques, mixed-tail generic chaining, and topological comparison arguments. Tight convergence rates are derived for canonical settings, including sub-Gaussian linear regression, smooth parametric models, and finite hypothesis classes, where the leading term depends solely on the complexity of the function class and second-order statistics.
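As a concrete illustration of the comparability condition referenced above (a standard calculation, not taken from the paper), consider the linear class behind the sub-Gaussian linear regression example: for $f_\theta(x) = \langle \theta, x \rangle$ with $x \sim \mathcal{N}(0, \Sigma)$, the function $f_\theta$ is itself Gaussian, so the standard moment bound $\|Z\|_{L^m} \leq \sqrt{m}$ for a standard Gaussian $Z$ gives $\|f_\theta\|_{L^m} \leq \sqrt{m}\,\|f_\theta\|_{L^2}$ and hence

$$
\|f_\theta\|_{\Psi_2} \;=\; \sup_{m \geq 1} m^{-1/2}\,\|f_\theta\|_{L^m} \;\leq\; \|f_\theta\|_{L^2},
$$

so the weakly sub-Gaussian condition stated in the abstract below holds with $p = 2$ and $\eta = 1$.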
📝 Abstract
In this work, we study statistical learning with dependent ($\beta$-mixing) data and square loss in a hypothesis class $\mathscr{F}\subset L_{\Psi_p}$ where $\Psi_p$ is the norm $\|f\|_{\Psi_p} \triangleq \sup_{m\geq 1} m^{-1/p} \|f\|_{L^m}$ for some $p\in [2,\infty]$. Our inquiry is motivated by the search for a sharp noise interaction term, or variance proxy, in learning with dependent data. Absent any realizability assumption, typical non-asymptotic results exhibit variance proxies that are deflated multiplicatively by the mixing time of the underlying covariates process. We show that whenever the topologies of $L^2$ and $\Psi_p$ are comparable on our hypothesis class $\mathscr{F}$ (that is, $\mathscr{F}$ is a weakly sub-Gaussian class: $\|f\|_{\Psi_p} \lesssim \|f\|_{L^2}^\eta$ for some $\eta\in (0,1]$), the empirical risk minimizer achieves a rate that only depends on the complexity of the class and second-order statistics in its leading term. Our result holds whether the problem is realizable or not, and we refer to this as a *near mixing-free rate*, since direct dependence on mixing is relegated to an additive higher-order term. We arrive at our result by combining the above notion of a weakly sub-Gaussian class with mixed-tail generic chaining. This combination allows us to compute sharp, instance-optimal rates for a wide range of problems. Examples that satisfy our framework include sub-Gaussian linear regression, more general smoothly parameterized function classes, finite hypothesis classes, and bounded smoothness classes.
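To make the setting tangible, here is a minimal, hypothetical simulation sketch (not code from the paper): it instantiates square-loss empirical risk minimization for the sub-Gaussian linear regression example, with covariates drawn from a stationary Gaussian AR(1) process, which is geometrically $\beta$-mixing for $|\rho| < 1$. All names (`ar1_covariates`, `rho`, `theta_star`) are illustrative choices, and the toy model is taken to be well specified for simplicity, even though the paper's guarantees do not require realizability.

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_covariates(n, d, rho):
    """Stationary Gaussian AR(1): x_t = rho * x_{t-1} + sqrt(1 - rho^2) * w_t,
    w_t ~ N(0, I_d). For |rho| < 1 the chain is geometrically beta-mixing and
    its stationary marginal is N(0, I_d)."""
    x = rng.standard_normal(d)
    out = np.empty((n, d))
    for t in range(n):
        x = rho * x + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(d)
        out[t] = x
    return out

n, d, noise = 2000, 10, 0.5
theta_star = rng.standard_normal(d) / np.sqrt(d)

for rho in (0.0, 0.9, 0.99):                            # rho = 0.0 is the i.i.d. baseline
    X = ar1_covariates(n, d, rho)
    y = X @ theta_star + noise * rng.standard_normal(n)
    theta_hat = np.linalg.lstsq(X, y, rcond=None)[0]    # ERM over the linear class = OLS
    # Under the N(0, I_d) stationary marginal, the excess square-loss risk of a
    # linear predictor equals the squared parameter error.
    print(f"rho = {rho:4.2f}   excess risk = {np.sum((theta_hat - theta_star) ** 2):.2e}")
```

Varying `rho` slows the mixing of the covariate process while leaving its stationary second-order statistics unchanged, which is the regime in which the paper's leading term is claimed not to degrade.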