Optimal Learning via Moderate Deviations Theory

📅 2023-05-23
📈 Citations: 3
Influential: 1
🤖 AI Summary
This paper constructs confidence intervals for function-value learning in a broad class of models, including nonparametric estimation of an expected loss posed as a stochastic programming problem and various SDE-based models. The approach systematically applies the moderate deviations principle to build high-accuracy confidence intervals that are statistically optimal under several criteria: exponential accuracy, minimality, consistency, controlled mischaracterization probability, and an eventual uniformly most accurate (UMA) property. The resulting intervals admit a unified robust optimization formulation in which the uncertainty set is induced by the moderate deviation rate function of the data-generating process, and for many models these problems, although infinite-dimensional, reformulate as tractable finite convex programs, improving both statistical efficiency and computational solvability.
📝 Abstract
This paper proposes a statistically optimal approach for learning a function value using a confidence interval in a wide range of models, including general non-parametric estimation of an expected loss described as a stochastic programming problem and various SDE models. More precisely, we develop a systematic construction of highly accurate confidence intervals using a moderate deviation principle-based approach. It is shown that the proposed confidence intervals are statistically optimal in the sense that they satisfy criteria regarding exponential accuracy, minimality, consistency, mischaracterization probability, and the eventual uniformly most accurate (UMA) property. The confidence intervals suggested by this approach are expressed as solutions to robust optimization problems, where the uncertainty is described via the underlying moderate deviation rate function induced by the data-generating process. We demonstrate that for many models these optimization problems admit tractable reformulations as finite convex programs even when the original problems are infinite-dimensional.
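To make the construction concrete in the simplest setting, the sketch below builds an interval for an i.i.d. sample mean by inverting the quadratic moderate-deviation rate function I(x) = x^2 / (2 * sigma^2), solving exp(-n * I(eps)) = alpha for the half-width eps. This is a minimal illustration of the idea, not the paper's general procedure; the helper name mdp_confidence_interval and the plug-in variance estimate are assumptions made for the example.

```python
import numpy as np

def mdp_confidence_interval(samples, alpha=1e-3):
    """Sketch: two-sided interval for E[X] from the quadratic
    moderate-deviation rate function I(x) = x^2 / (2 * sigma^2).

    Setting exp(-n * I(eps)) = alpha and solving for the half-width
    gives eps = sigma * sqrt(2 * log(1/alpha) / n), so miscoverage
    decays exponentially as alpha is driven down with n.
    """
    x = np.asarray(samples, dtype=float)
    n = x.size
    sigma = x.std(ddof=1)  # plug-in variance estimate (illustrative)
    eps = sigma * np.sqrt(2.0 * np.log(1.0 / alpha) / n)
    return x.mean() - eps, x.mean() + eps

# Usage: interval for the mean of a skewed sample.
rng = np.random.default_rng(0)
lo, hi = mdp_confidence_interval(rng.exponential(scale=2.0, size=10_000))
print(f"interval: [{lo:.3f}, {hi:.3f}]")
```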
Problem

Research questions and friction points this paper is trying to address.

How to construct statistically optimal confidence intervals for learning a function value in nonparametric and SDE-based models
How to achieve exponential-order accuracy guarantees via a moderate deviation principle-based construction
How to solve the resulting robust optimization problems through tractable convex reformulations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Moderate deviation principle-based construction of confidence intervals
Robust optimization formulation with an uncertainty set defined by the moderate deviation rate function
Tractable finite convex reformulations of the underlying infinite-dimensional problems (see the sketch after this list)
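As a rough illustration of the robust optimization viewpoint, the hedged sketch below computes interval endpoints as the minimum and maximum of an expected loss over a distributional uncertainty set, here a chi-square-type quadratic proxy for a rate-function ball on a finite support, which makes both inner problems finite convex programs. The support, loss, smoothing, and radius are all illustrative assumptions rather than the paper's formulation; the sketch uses the cvxpy modeling library.

```python
import cvxpy as cp
import numpy as np

# Hypothetical discrete setup: support points, smoothed empirical
# frequencies, and a loss evaluated on the support (all illustrative).
rng = np.random.default_rng(1)
support = np.linspace(-2.0, 2.0, 15)
counts = rng.multinomial(500, np.full(15, 1.0 / 15))
p_hat = (counts + 1.0) / (counts.sum() + 15)   # smoothed so weights stay > 0
loss = support ** 2                            # stand-in loss f(x) = x^2
radius = np.log(1.0 / 1e-3) / 500              # assumed rate-function budget

q = cp.Variable(len(support), nonneg=True)
# Uncertainty set: probability vectors whose chi-square-type quadratic
# divergence from p_hat stays within the budget.
uncertainty = [cp.sum(q) == 1,
               0.5 * cp.sum(cp.square(q - p_hat) / p_hat) <= radius]

upper = cp.Problem(cp.Maximize(loss @ q), uncertainty).solve()
lower = cp.Problem(cp.Minimize(loss @ q), uncertainty).solve()
print(f"robust interval for E[f(X)]: [{lower:.4f}, {upper:.4f}]")
```

The paper's construction operates at the level of general, possibly infinite-dimensional distribution spaces; the sketch only shows why a quadratic rate function yields convex inner problems once the support is finite.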