Hypothesis Selection: A High Probability Conundrum

📅 2025-09-03
🤖 AI Summary
This paper addresses the hypothesis selection problem: given i.i.d. samples from an unknown distribution and a set of candidate distributions, the goal is to select, with high probability, a hypothesis whose total variation (TV) distance to the true distribution is at most $C \cdot \mathrm{OPT} + \varepsilon$, where $\mathrm{OPT}$ is the minimum TV distance achievable within the candidate set. We present the first algorithm achieving the optimal approximation factor $C = 3$, optimal sample complexity, and near-linear running time $\tilde{O}(n/(\delta\varepsilon^2))$, significantly improving upon prior work. Furthermore, in settings where $\mathrm{OPT}$ is known or preprocessing is allowed, we design more efficient algorithms with only weak dependence on the error and confidence parameters. Technically, our approach integrates precise TV distance estimation, refined probabilistic analysis, sampling optimization, and a novel high-probability error control mechanism—thereby resolving, for the first time, the long-standing open problem of simultaneously attaining the optimal approximation factor and optimal runtime.
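For intuition, the factor $C = 3$ is classically attained by the Devroye–Lugosi minimum-distance estimate over pairwise Scheffé sets. The sketch below illustrates that baseline for discrete hypotheses — it is not the paper's near-linear algorithm (its runtime is $O(n^2)$ per the pairwise loop), and all names here are illustrative:

```python
import numpy as np

def min_distance_estimate(hypotheses, samples):
    """Classical minimum-distance estimate (Devroye-Lugosi sketch).

    `hypotheses` is a list of probability vectors over a finite domain;
    `samples` is an integer array of i.i.d. draws from the unknown P.
    Picks the hypothesis whose mass on every pairwise Scheffe set is
    closest to the empirical mass -- factor C = 3, but O(n^2) time,
    unlike the near-linear algorithms this paper develops.
    """
    n = len(hypotheses)
    scores = np.zeros(n)
    for i in range(n):
        worst = 0.0
        for j in range(n):
            if i == j:
                continue
            # Scheffe set A_ij = {x : H_i(x) > H_j(x)}
            A = hypotheses[i] > hypotheses[j]
            p_hat = np.mean(A[samples])  # empirical mass of A under P
            worst = max(worst, abs(hypotheses[i][A].sum() - p_hat))
        scores[i] = worst
    return int(np.argmin(scores))
```

With enough samples, the empirical mass of each Scheffé set concentrates around its true mass, and the minimizer is within $3 \cdot \mathrm{OPT} + \varepsilon$ of $P$ in TV distance; the paper's contribution is achieving this guarantee in near-linear rather than quadratic time.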

📝 Abstract
In the hypothesis selection problem, we are given a finite set of candidate distributions (hypotheses), $\mathcal{H} = \{H_1, \ldots, H_n\}$, and samples from an unknown distribution $P$. Our goal is to find a hypothesis $H_i$ whose total variation distance to $P$ is comparable to that of the nearest hypothesis in $\mathcal{H}$. If the minimum distance is $\mathsf{OPT}$, we aim to output an $H_i$ such that, with probability at least $1-\delta$, its total variation distance to $P$ is at most $C \cdot \mathsf{OPT} + \varepsilon$. Despite decades of work, key aspects of this problem remain unresolved, including the optimal running time for algorithms that achieve the optimal sample complexity and best possible approximation factor of $C=3$. The previous state-of-the-art result [Aliakbarpour, Bun, Smith, NeurIPS 2024] provided a nearly-linear-in-$n$ time algorithm, but with a sub-optimal dependence on the other parameters, running in $\tilde{O}(n/(\delta^3\varepsilon^3))$ time. We improve this time complexity to $\tilde{O}(n/(\delta\varepsilon^2))$, significantly reducing the dependence on the confidence and error parameters. Furthermore, we study hypothesis selection in three alternative settings, resolving or making progress on several open questions from prior works. (1) We settle the optimal approximation factor when bounding the expected distance of the output hypothesis, rather than its high-probability performance. (2) Assuming the numerical value of $\mathsf{OPT}$ is known in advance, we present an algorithm obtaining $C=3$ and runtime $\tilde{O}(n/\varepsilon^2)$ with the optimal sample complexity and succeeding with high probability in $n$. (3) Allowing a polynomial preprocessing step on the hypothesis class $\mathcal{H}$ before observing samples, we present an algorithm with $C=3$ and subquadratic runtime which succeeds with high probability in $n$.
Problem

Research questions and friction points this paper is trying to address.

Optimizing time complexity for hypothesis selection with total variation distance
Determining the optimal approximation factor for expected distance output
Achieving efficient algorithms with known OPT value and preprocessing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reduced runtime to Õ(n/(δε²)) while retaining the optimal approximation factor C = 3
Achieved C=3 factor with known OPT and preprocessing
Settled optimal expected distance bounds for output