AI Summary
This paper addresses the problem of testing a composite null hypothesis, comprising $M$ candidate distributions, against a simple alternative hypothesis consisting of a single distribution, with the objective of jointly identifying an approximately least-favorable null distribution and an approximately optimal test. We propose a convex optimization framework based on Stochastic Mirror Descent (SMD), the first systematic application of this algorithm to such composite hypothesis testing problems. We establish finite-step convergence guarantees and provide explicit design guidelines for the initial point, the step size, and the number of iterations. The resulting test enjoys rigorous statistical guarantees, including controlled Type-I error under the worst-case null, and demonstrably outperforms existing heuristic approaches. Crucially, it achieves a favorable trade-off between robustness (minimax optimality under distributional uncertainty) and computational efficiency (scalable first-order updates).
Abstract
We consider a class of hypothesis testing problems where the null hypothesis postulates $M$ distributions for the observed data, and there is only one possible distribution under the alternative. We show that one can use a stochastic mirror descent routine for convex optimization to provably obtain, after finitely many iterations, both an approximate least-favorable distribution and a nearly optimal test, in a sense we make precise. Our theoretical results yield concrete recommendations about the algorithm's implementation, including its initial condition, its step size, and the number of iterations. Importantly, our suggested algorithm can be viewed as a slight variation of the algorithm suggested by Elliott, Müller, and Watson (2015), whose theoretical performance guarantees are unknown.
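To make the algorithmic template concrete, the following is a minimal sketch of stochastic mirror descent with the entropic mirror map (exponentiated-gradient updates) on the probability simplex, the natural geometry for a mixing weight over $M$ null distributions. This is not the paper's algorithm: the objective, gradient oracle, and constants below (`G`, `noisy_grad`, the toy linear loss) are synthetic stand-ins chosen purely to illustrate the uniform initial condition, a standard $\sqrt{2\log M / T}$ step size, and iterate averaging over a fixed iteration budget.

```python
import numpy as np

# Hedged sketch of entropic stochastic mirror descent on the simplex.
# The problem data below are synthetic; only the update template
# (multiplicative step + normalization + averaging) is the point.

rng = np.random.default_rng(0)
M = 5                       # number of candidate null distributions
T = 2000                    # iteration budget, fixed in advance
G = rng.normal(size=(M,))   # synthetic per-distribution losses (stand-in)

def smd_simplex(grad_fn, M, T, step):
    """Entropic SMD on the simplex; returns the averaged iterate."""
    p = np.full(M, 1.0 / M)         # uniform initial condition
    avg = np.zeros(M)
    for _ in range(T):
        g = grad_fn(p)              # stochastic gradient estimate
        p = p * np.exp(-step * g)   # multiplicative (mirror) update
        p /= p.sum()                # renormalize back onto the simplex
        avg += p
    return avg / T

# Toy objective: minimize <p, G> over the simplex, observing only
# noisy gradients G + noise; the minimizer concentrates on argmin(G).
noisy_grad = lambda p: G + rng.normal(scale=0.1, size=M)
step = np.sqrt(2.0 * np.log(M) / T)   # standard entropic-SMD step size
p_bar = smd_simplex(noisy_grad, M, T, step)
print(p_bar)
```

In the paper's setting the averaged iterate plays the role of the approximate least-favorable mixture over the $M$ nulls, and the associated likelihood-ratio test against that mixture is the approximately optimal test; the sketch above only shows the generic first-order machinery.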