🤖 AI Summary
Existing methods for multi-output kernel regression struggle to provide tight, distribution-free uncertainty bounds that are readily applicable to downstream tasks. This work proposes the first computationally efficient, distribution-free, and tight uncertainty bound tailored to multi-output settings. By leveraging duality theory, the approach formulates an unconstrained optimization framework whose structure mirrors that of Gaussian process confidence bounds, thereby facilitating seamless integration into control and optimization pipelines. The method unifies and generalizes prior results limited to single-output scenarios. Empirical validation on quadrotor dynamics learning demonstrates its superiority over existing approaches, which are either overly conservative or limited in scope; the proposed bound yields significantly tighter and more practical uncertainty quantification.
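For intuition, the Gaussian-process-style bound structure referenced above reduces, in the single-output case, to a confidence band of the form μ(x) ± β·σ(x). The sketch below illustrates that generic structure with standard scikit-learn GP regression and a hand-picked scaling `beta`; it is not the paper's multi-output, distribution-free bound, and all names in it are illustrative.

```python
# Illustrative only: a generic single-output GP confidence band of the
# form mu(x) +/- beta * sigma(x). NOT the paper's method; beta is
# hand-picked here, whereas principled bounds derive it from assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Noisy observations of a latent function (here: sin).
X = rng.uniform(-3, 3, size=(30, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(30)

# Fit a GP; alpha accounts for observation noise.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=0.01)
gp.fit(X, y)

# Posterior mean and standard deviation on a test grid.
X_test = np.linspace(-3, 3, 200).reshape(-1, 1)
mu, sigma = gp.predict(X_test, return_std=True)

beta = 2.0  # confidence scaling; assumption-dependent in rigorous bounds
lower, upper = mu - beta * sigma, mu + beta * sigma
```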
📄 Abstract
Non-conservative uncertainty bounds are essential for making reliable predictions about latent functions from noisy data, and thus a key enabler for safe learning-based control. In this domain, kernel methods such as Gaussian process regression are established techniques, thanks to their inherent uncertainty quantification mechanism. Still, existing bounds either pose strong assumptions on the underlying noise distribution, are conservative, do not scale well in the multi-output case, or are difficult to integrate into downstream tasks. This paper addresses these limitations by presenting a tight, distribution-free bound for multi-output kernel-based estimates. It is obtained through an unconstrained, duality-based formulation, which shares the same structure as classic Gaussian process confidence bounds and can thus be straightforwardly integrated into downstream optimization pipelines. We show that the proposed bound generalizes many existing results and illustrate its application using an example inspired by quadrotor dynamics learning.
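For reference, the classic Gaussian process confidence bound whose structure the proposed bound is said to share can be written as below; the notation (posterior mean μ_n, posterior standard deviation σ_n, confidence parameter β_n) is the standard one from the GP bandit/confidence-bound literature and is illustrative, not the paper's own.

```latex
% Classic GP confidence bound (standard form, illustrative notation):
% with probability at least 1 - \delta, simultaneously for all x,
\[
  \lvert f(x) - \mu_n(x) \rvert \;\le\; \beta_n^{1/2}\,\sigma_n(x),
\]
% where \mu_n and \sigma_n are the GP posterior mean and standard
% deviation after n noisy observations, and \beta_n is a confidence
% scaling that depends on \delta and on the assumptions placed on the
% latent function f and the noise distribution.
```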