On Regression in Extreme Regions

📅 2023-03-06
🏛️ arXiv.org
📈 Citations: 5
Influential: 1
📄 PDF
🤖 AI Summary
This paper addresses extrapolation and out-of-distribution generalization in continuous regression when covariates take extremely large values. Methodologically, it introduces the notion of an *asymptotic extreme risk* and constructs a non-asymptotic estimator based solely on the largest observations; it integrates regular variation theory, extreme-value statistics, and empirical risk minimization (ERM); and it designs a tail-truncated, quantile-driven loss function. Theoretical guarantees are established for least-squares regression over VC classes. Key contributions include: (i) a rigorous theoretical foundation for prediction consistency under extreme covariate inputs; and (ii) an estimator that circumvents the failure of standard ERM in sparse tail regions. Numerical experiments demonstrate substantial improvements in predictive accuracy in extreme-X regions, confirming both theoretical soundness and practical efficacy.
📝 Abstract
The statistical learning problem consists in building a predictive function $\hat{f}$ based on independent copies of $(X,Y)$ so that $Y$ is approximated by $\hat{f}(X)$ with minimum (squared) error. Motivated by various applications, special attention is paid here to the case of extreme (i.e. very large) observations $X$. Because of their rarity, the contributions of such observations to the (empirical) error are negligible, and the predictive performance of empirical risk minimizers can consequently be very poor in extreme regions. In this paper, we develop a general framework for regression on extremes. Under appropriate regular variation assumptions regarding the pair $(X,Y)$, we show that an asymptotic notion of risk can be tailored to summarize appropriately predictive performance in extreme regions. It is also proved that minimization of an empirical and nonasymptotic version of this 'extreme risk', based on a fraction of the largest observations solely, yields good generalization capacity. In addition, numerical results providing strong empirical evidence of the relevance of the approach proposed are displayed.
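One plausible way to formalize the 'extreme risk' described in the abstract is as a conditional squared error given an extreme covariate, together with an empirical counterpart over the largest observations. The threshold notation $t$, the sample fraction $k$, and the use of the covariate norm below are illustrative reconstructions from the abstract, not quoted from the paper:

```latex
% Asymptotic extreme risk: squared error conditional on an extreme covariate.
R_t(f) \;=\; \mathbb{E}\!\left[\,(Y - f(X))^2 \;\middle|\; \|X\| > t\,\right],
\qquad
R_\infty(f) \;=\; \lim_{t \to \infty} R_t(f).

% Empirical, non-asymptotic counterpart built from the k largest observations,
% where \|X_{(1)}\| \ge \dots \ge \|X_{(n)}\| denote the ordered covariate norms:
\widehat{R}_k(f) \;=\; \frac{1}{k} \sum_{i=1}^{n}
  \bigl(Y_i - f(X_i)\bigr)^2 \,\mathbb{1}\{\|X_i\| \ge \|X_{(k)}\|\}.
```

Minimizing $\widehat{R}_k$ over a VC class is the empirical risk minimization step whose generalization capacity the paper analyzes.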
Problem

Research questions and friction points this paper is trying to address.

Establishes framework for extrapolation on unobserved covariate tails
Performs regression on the subsample of observations with extreme covariates
Quantifies predictive performance via excess risk bounds
Innovation

Methods, ideas, or system contributions that make the work stand out.

Statistical regression on extreme subsample observations
Focusing on angular components for extrapolation
Using multivariate regular variation theory foundation
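The two ideas above, restricting the fit to the most extreme observations and predicting from their angular components, can be combined in a minimal sketch. This is an illustrative reconstruction, not the paper's estimator: the function name, the ordinary-least-squares model class, and the choice of the Euclidean norm are assumptions made here for concreteness.

```python
import numpy as np

def fit_extreme_regression(X, y, k):
    """Least-squares fit on the k samples with the largest covariate norm,
    using only the angular component X/||X|| as input (illustrative sketch;
    the paper's analysis covers general VC classes of predictors)."""
    norms = np.linalg.norm(X, axis=1)
    idx = np.argsort(norms)[-k:]               # indices of the k most extreme points
    angles = X[idx] / norms[idx, None]         # project extremes onto the unit sphere
    A = np.hstack([angles, np.ones((k, 1))])   # design matrix with intercept column
    coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
    threshold = norms[idx].min()               # radial threshold defining the tail region

    def predict(X_new):
        n = np.linalg.norm(X_new, axis=1, keepdims=True)
        A_new = np.hstack([X_new / n, np.ones((len(X_new), 1))])
        return A_new @ coef

    return predict, threshold
```

In use, one would fit on training data, then apply `predict` only to new covariates whose norm exceeds the returned `threshold`, since that is the region the truncated empirical risk actually controls.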
Nathan Huet
LTCI, Telecom Paris, Institut Polytechnique de Paris
S. Clémençon
LTCI, Telecom Paris, Institut Polytechnique de Paris
Anne Sabourin
Université Paris Cité, CNRS, MAP5, F-75006 Paris, France
statistics · extreme value theory · statistical learning