🤖 AI Summary
This paper studies the optimal allocation of variances among Gaussian random variables under a fixed total-variance constraint, so as to maximize the expected supremum. The problem arises in auction mechanism design and in learning mixture models in quantitative genetics. The authors first characterize the structural properties of optimal solutions, establishing that variance should concentrate on a sparse subset of variables ("sparsity-concentration"). Leveraging this insight, they propose a polynomial-time approximation scheme (PTAS) for the single-group case—achieving a (1−ε)-approximation in poly(n, 1/ε) time for any ε > 0—and an O(log n)-approximation algorithm for the multi-group case (m > 1). Their approach integrates tools from Gaussian process theory, probabilistic upper-bound analysis, and combinatorial optimization. The theoretical guarantees strictly improve upon prior work, while the algorithms remain computationally tractable and practically implementable.
📝 Abstract
We design efficient approximation algorithms for maximizing the expectation of the supremum of families of Gaussian random variables. In particular, let $\mathrm{OPT}:=\max_{\sigma_1,\ldots,\sigma_n}\mathbb{E}\left[\sum_{j=1}^{m}\max_{i\in S_j} X_i\right]$, where the $X_i$ are Gaussian random variables with variances $\sigma_i^2$, $S_j\subset[n]$, and $\sum_i\sigma_i^2=1$. Our theoretical results include:
- a characterization of the optimal variance allocation -- it concentrates on a small subset of variables as $|S_j|$ increases;
- a polynomial-time approximation scheme (PTAS) for computing $\mathrm{OPT}$ when $m=1$; and
- an $O(\log n)$-approximation algorithm for computing $\mathrm{OPT}$ for general $m>1$.

Such expectation-maximization problems occur in diverse applications, ranging from utility maximization in auction markets to learning mixture models in quantitative genetics.
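The sparsity-concentration phenomenon for a single group ($m=1$) can be seen numerically. The sketch below (not from the paper; `expected_max` is a hypothetical helper) uses Monte Carlo simulation to estimate $\mathbb{E}[\max_i X_i]$ when a total variance of 1 is split equally over $k$ independent mean-zero Gaussians: spreading over a few variables beats putting everything on one (which yields 0 in expectation), but spreading over many variables is again suboptimal, so the best allocation uses a small subset.

```python
import numpy as np

def expected_max(k, n_samples=200_000, rng=None):
    """Monte Carlo estimate of E[max_i X_i] when a total variance of 1
    is split equally over k independent mean-zero Gaussians
    (each X_i ~ N(0, 1/k))."""
    rng = rng if rng is not None else np.random.default_rng(0)
    samples = rng.normal(0.0, np.sqrt(1.0 / k), size=(n_samples, k))
    return samples.max(axis=1).mean()

rng = np.random.default_rng(42)
# Equal split over k variables, for several sparsity levels k.
estimates = {k: expected_max(k, rng=rng) for k in (1, 2, 5, 50)}
```

Here `estimates[1]` is near 0, the value rises for small `k`, and falls again for large `k` (since $\mathbb{E}[\max]$ of $k$ i.i.d. $\mathcal{N}(0,1/k)$ variables scales like $\sqrt{2\ln k / k} \to 0$), consistent with the paper's claim that the optimal allocation concentrates on a small subset.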