Sparsifying Suprema of Gaussian Processes

📅 2024-11-22
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
This paper addresses dimension-independent sparse approximation of suprema of Gaussian processes: given a bounded set of vectors T ⊆ ℝⁿ, select a subset whose size depends only on the accuracy ε (not on the ambient dimension n or on |T|), such that the supremum over this subset—with each element shifted by a bias term—ε-approximates the original process's supremum in L¹. Methodologically, the work combines Gaussian process theory, random projection, L¹-approximation, and geometric functional analysis into a novel sparse-construction framework. Key contributions include: (i) the first construction of an ε-sparsifier of size O_ε(1), independent of both the dimension and |T|; (ii) a "junta-type" structural theorem for norms; and (iii) a proof that any intersection of halfspaces at distance r from the origin admits an ε-approximation (under the Gaussian measure) using only O_{r,ε}(1) halfspaces. These results yield polynomial-time agnostic learning and tolerant property testing algorithms for intersections of halfspaces.
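The halfspace-sparsification result can be sanity-checked on a toy case. The following Monte Carlo sketch is purely illustrative and is not the paper's construction: in ℝ², the disc of radius r is the intersection of infinitely many halfspaces at distance r from the origin, and keeping only m tangent halfspaces (a regular m-gon) already makes the Gaussian measure of the disagreement region small.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 200_000
g = rng.standard_normal((n_samples, 2))  # g ~ N(0, I_2)

# The disc of radius r is the intersection of infinitely many halfspaces
# {x : u·x <= r}, one per unit direction u.  Keep only m of them (a regular
# m-gon circumscribing the disc) and estimate the Gaussian probability of
# disagreement between the two sets.
r = 1.0
in_disc = np.linalg.norm(g, axis=1) <= r

def in_polygon(x, m, r):
    """Membership in the intersection of m halfspaces tangent to the radius-r disc."""
    theta = 2 * np.pi * np.arange(m) / m
    u = np.stack([np.cos(theta), np.sin(theta)])   # (2, m) unit outward normals
    return (x @ u).max(axis=1) <= r

for m in (4, 8, 16):
    # Gaussian measure of the symmetric difference, estimated by Monte Carlo
    err = np.mean(in_polygon(g, m, r) != in_disc)
    print(f"m = {m:2d} halfspaces: disagreement ~ {err:.4f}")
```

The disagreement probability shrinks quickly as m grows, matching the qualitative message that O_{r,ε}(1) halfspaces suffice for ε-closeness under N(0, I_n); the paper's actual bound is of course dimension-independent and far more general than this 2D picture.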

📝 Abstract
We give a dimension-independent sparsification result for suprema of centered Gaussian processes: Let $T$ be any (possibly infinite) bounded set of vectors in $\mathbb{R}^n$, and let $\{\boldsymbol{X}_t := t \cdot \boldsymbol{g}\}_{t \in T}$ be the canonical Gaussian process on $T$, where $\boldsymbol{g} \sim N(0, I_n)$. We show that there is an $O_\varepsilon(1)$-size subset $S \subseteq T$ and a set of real values $\{c_s\}_{s \in S}$ such that the random variable $\sup_{s \in S} \{\boldsymbol{X}_s + c_s\}$ is an $\varepsilon$-approximator (in $L^1$) of the random variable $\sup_{t \in T} \boldsymbol{X}_t$. Notably, the size of the sparsifier $S$ is completely independent of both $|T|$ and the ambient dimension $n$. We give two applications of this sparsification theorem:

- A "Junta Theorem" for Norms: We show that given any norm $\nu(x)$ on $\mathbb{R}^n$, there is another norm $\psi(x)$, depending only on the projection of $x$ onto $O_\varepsilon(1)$ directions, for which $\psi(\boldsymbol{g})$ is a multiplicative $(1 \pm \varepsilon)$-approximation of $\nu(\boldsymbol{g})$ with probability $1-\varepsilon$ for $\boldsymbol{g} \sim N(0, I_n)$.
- Sparsification of Convex Sets: We show that any intersection of (possibly infinitely many) halfspaces in $\mathbb{R}^n$ that are at distance $r$ from the origin is $\varepsilon$-close (under $N(0, I_n)$) to an intersection of only $O_{r,\varepsilon}(1)$ halfspaces. This yields new polynomial-time \emph{agnostic learning} and \emph{tolerant property testing} algorithms for intersections of halfspaces.
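A hedged toy illustration of why the bias terms $c_s$ matter (an assumed example, not the paper's construction): take $T$ to be the unit ball in $\mathbb{R}^n$, so that $\sup_{t \in T} t \cdot \boldsymbol{g} = \|\boldsymbol{g}\|_2$, which concentrates near $\sqrt{n}$. A single point $s = 0$ with bias $c_0 = \mathbb{E}\|\boldsymbol{g}\|_2$ is then already a good $L^1$ approximator in relative terms, whereas any small subset of unit vectors with zero biases falls short by roughly $\sqrt{n}$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_samples = 400, 20_000
g = rng.standard_normal((n_samples, n))

# T = unit ball in R^n, so sup_{t in T} t·g = ||g||_2 (Cauchy-Schwarz).
sup_T = np.linalg.norm(g, axis=1)

# Sparsifier S = {0} with a single bias c_0 = E||g||: the supremum over S is
# the constant c_0, and concentration of ||g|| around sqrt(n) makes this an
# L^1 approximator with small *relative* error for large n.
c0 = sup_T.mean()
rel_err_with_bias = np.abs(sup_T - c0).mean() / sup_T.mean()

# Without bias terms, even 100 random unit vectors only reach about
# sqrt(2 log 100), which is far below sqrt(n).
S = rng.standard_normal((100, n))
S /= np.linalg.norm(S, axis=1, keepdims=True)
sup_S = (g @ S.T).max(axis=1)
rel_err_no_bias = np.abs(sup_T - sup_S).mean() / sup_T.mean()

print(f"relative L1 error with bias:    {rel_err_with_bias:.3f}")
print(f"relative L1 error without bias: {rel_err_no_bias:.3f}")
```

This only shows that biases can rescue a trivial sparsifier for one easy $T$; the theorem's content is that an $O_\varepsilon(1)$-size biased sparsifier exists for every bounded $T$.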
Problem

Research questions and friction points this paper is trying to address.

Sparsifying suprema of Gaussian processes with dimension-independent subsets
Developing junta theorems for approximating norms with few directions
Sparsifying intersections of halfspaces for efficient learning algorithms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dimension-independent sparsification for Gaussian process suprema
Constructs small subset approximating supremum with L1 error
Applications include junta theorems and convex set sparsification