🤖 AI Summary
This paper addresses robust nonparametric regression under heavy-tailed noise, where classical generalization error bounds fail under weak moment conditions (e.g., only the (1+ε)-th moment exists) and over unbounded hypothesis spaces. To overcome this, we propose the L₂ prediction error, rather than the conventional robust risk, as the primary learning-theoretic measure. By introducing a *probabilistically effective hypothesis space*, we enable a rigorous bias–variance decomposition and uncover a fundamental robustness–bias trade-off governed by the scale parameter σ in the Huber loss. Integrating Tikhonov regularization in an RKHS, we establish a comparison theorem linking the excess robust risk to the L₂ prediction error. Without requiring uniform boundedness of the hypothesis functions, we derive explicit finite-sample error bounds and optimal convergence rates for Huber regression, along with a principled rule for tuning σ. The framework extends naturally to other robust loss functions.
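As a toy illustration of the robustness–bias trade-off governed by σ (not the paper's estimator), the sketch below implements the Huber loss and a Huber M-estimate of location via iteratively reweighted least squares. All function names and numerical choices here are illustrative assumptions: large σ recovers the sample mean (least squares, sensitive to heavy tails), while small σ approaches the sample median (robust, but biased away from the mean).

```python
import numpy as np

def huber_loss(r, sigma):
    """Huber loss ell_sigma: 0.5*r^2 for |r| <= sigma, sigma*|r| - 0.5*sigma^2 beyond."""
    a = np.abs(r)
    return np.where(a <= sigma, 0.5 * r**2, sigma * a - 0.5 * sigma**2)

def huber_location(y, sigma, n_iter=100):
    """M-estimate of location under ell_sigma via iteratively reweighted least squares.

    As sigma -> infinity the estimate tends to the sample mean (least squares);
    as sigma -> 0 it tends toward the sample median (least absolute deviations),
    which is the trade-off the scale parameter sigma controls.
    """
    mu = np.median(y)  # robust starting point
    for _ in range(n_iter):
        r = y - mu
        # Huber IRLS weights: full weight inside the quadratic zone, downweighted outside
        w = np.minimum(1.0, sigma / np.maximum(np.abs(r), 1e-12))
        mu = np.sum(w * y) / np.sum(w)
    return mu
```

For a sample with one gross outlier, e.g. `[0.0, 1.0, 2.0, 100.0]`, a very large σ returns the mean (25.75), while a very small σ stays near the median (1.5), making the bias–robustness tension concrete.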
📝 Abstract
We investigate robust nonparametric regression in the presence of heavy-tailed noise, where the hypothesis class may contain unbounded functions and robustness is ensured via a robust loss function $\ell_\sigma$. Using Huber regression as a representative example within Tikhonov-regularized risk minimization in reproducing kernel Hilbert spaces (RKHS), we address two central challenges: (i) the breakdown of standard concentration tools under weak moment assumptions, and (ii) the analytical difficulties introduced by unbounded hypothesis spaces. Our first message is conceptual: conventional generalization-error bounds for robust losses do not faithfully capture out-of-sample performance. We argue that learnability should instead be quantified through the prediction error, namely the $L_2$-distance to the truth $f^\star$, which is $\sigma$-independent and directly reflects the target of robust estimation. To make this workable under unboundedness, we introduce a \emph{probabilistically effective hypothesis space} that confines the estimator with high probability and enables a meaningful bias--variance decomposition under weak $(1+\varepsilon)$-moment conditions. Technically, we establish new comparison theorems linking the excess robust risk to the $L_2$ prediction error up to a residual of order $\mathcal{O}(\sigma^{-2\varepsilon})$, clarifying the robustness--bias trade-off induced by the scale parameter $\sigma$. Building on this, we derive explicit finite-sample error bounds and convergence rates for Huber regression in RKHS that hold without uniform boundedness and under heavy-tailed noise. Our study delivers principled tuning rules, extends beyond Huber to other robust losses, and highlights the prediction error, not the excess generalization risk, as the fundamental lens for analyzing robust learning.
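To make the setting concrete, here is a minimal, hedged sketch of Tikhonov-regularized Huber regression in an RKHS, solved by iteratively reweighted least squares in the representer form $f = \sum_j \alpha_j K(\cdot, x_j)$. The Gaussian kernel, the IRLS solver, and all parameter values (σ, λ, bandwidth) are assumptions for illustration, not the paper's algorithm; the Student-t(2) noise is one example of a distribution with only $(1+\varepsilon)$-moments for $\varepsilon < 1$.

```python
import numpy as np

def gaussian_kernel(x, z, bw):
    """Gaussian (RBF) kernel matrix for 1-D inputs."""
    return np.exp(-(x[:, None] - z[None, :]) ** 2 / (2.0 * bw ** 2))

def kernel_huber_fit(x, y, sigma=1.0, lam=1e-3, bw=0.1, n_iter=50):
    """Approximately minimize (1/n) sum_i ell_sigma(y_i - f(x_i)) + lam * ||f||_K^2
    over f = sum_j alpha_j K(., x_j), via IRLS: each step solves the weighted
    kernel ridge system (W K + 2 n lam I) alpha = W y with Huber weights
    w_i = min(1, sigma / |r_i|)."""
    n = len(x)
    K = gaussian_kernel(x, x, bw)
    alpha = np.zeros(n)
    for _ in range(n_iter):
        r = y - K @ alpha
        w = np.minimum(1.0, sigma / np.maximum(np.abs(r), 1e-12))
        alpha = np.linalg.solve(w[:, None] * K + 2.0 * n * lam * np.eye(n), w * y)
    return alpha, K

# Heavy-tailed demo: Student-t(2) noise has infinite variance,
# but finite (1+eps)-th moments for eps < 1.
rng = np.random.default_rng(0)
n = 200
x = np.sort(rng.uniform(0.0, 1.0, n))
f_star = np.sin(2.0 * np.pi * x)
y = f_star + 0.2 * rng.standard_t(df=2, size=n)

alpha, K = kernel_huber_fit(x, y)
# Empirical proxy for the L2 prediction error ||f_hat - f_star||_2
rmse = np.sqrt(np.mean((K @ alpha - f_star) ** 2))
```

The quantity `rmse` is the empirical analogue of the $L_2$ prediction error that the abstract advocates as the measure of learnability; it is computed against the truth $f^\star$ and does not involve σ.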