LITE: Efficiently Estimating Gaussian Probability of Maximality

📅 2025-01-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses the high computational and memory cost of estimating the probability of maximality (PoM) of Gaussian random vectors, i.e., the probability that each dimension attains the maximum. The authors propose LITE, the first algorithm achieving near-linear time and space complexity for this task. Methodologically, PoM estimation is cast as an entropy-regularized upper confidence bound (UCB) optimization problem, combining Gaussian integral approximations with exploitation of low-rank covariance structure to yield a scalable estimation framework. Theoretically, the approach unifies and improves upon existing estimators. Empirically, it achieves state-of-the-art accuracy across diverse benchmark tasks, runs orders of magnitude faster than mainstream baselines, and improves downstream performance on tasks such as entropy estimation and optimal control of multi-armed bandits. This enables efficient fine-grained action analysis in applications such as Bayesian optimization, reinforcement learning, and drug discovery.

📝 Abstract
We consider the problem of computing the probability of maximality (PoM) of a Gaussian random vector, i.e., the probability for each dimension to be maximal. This is a key challenge in applications ranging from Bayesian optimization to reinforcement learning, where the PoM not only helps with finding an optimal action, but yields a fine-grained analysis of the action domain, crucial in tasks such as drug discovery. Existing techniques are costly, scaling polynomially in computation and memory with the vector size. We introduce LITE, the first approach for estimating Gaussian PoM with almost-linear time and memory complexity. LITE achieves SOTA accuracy on a number of tasks, while being in practice several orders of magnitude faster than the baselines. This also translates to a better performance on downstream tasks such as entropy estimation and optimal control of bandits. Theoretically, we cast LITE as entropy-regularized UCB and connect it to prior PoM estimators.
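To make the PoM quantity concrete, the following sketch estimates it by brute-force Monte Carlo: sample from the multivariate Gaussian and count how often each dimension is the argmax. This is the naive baseline that LITE improves upon, not LITE's algorithm; the function name `pom_monte_carlo` and its parameters are illustrative.

```python
import numpy as np

def pom_monte_carlo(mean, cov, n_samples=100_000, seed=0):
    """Naive Monte Carlo estimate of the probability of maximality (PoM):
    the probability that each coordinate of N(mean, cov) is the largest."""
    rng = np.random.default_rng(seed)
    # Draw joint samples; shape (n_samples, d).
    samples = rng.multivariate_normal(mean, cov, size=n_samples)
    # For each sample, record which dimension is maximal.
    winners = samples.argmax(axis=1)
    # Empirical frequency of being the maximum, per dimension.
    return np.bincount(winners, minlength=len(mean)) / n_samples

# Example: three arms with increasing means and unit variance.
mean = np.array([0.0, 0.5, 1.0])
cov = np.eye(3)
pom = pom_monte_carlo(mean, cov)  # probabilities sum to 1; arm 2 dominates
```

Note the cost: a Cholesky factorization of the d x d covariance plus dense per-sample work, which is exactly the polynomial scaling in computation and memory that the abstract says LITE avoids with its almost-linear complexity.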
Problem

Research questions and friction points this paper is trying to address.

Gaussian Random Vector
Computational Efficiency
Memory Demand
Innovation

Methods, ideas, or system contributions that make the work stand out.

LITE
Gaussian Random Vector
Optimization Method
Nicolas Menet
ETH Zurich
Jonas Hübotter
ETH Zurich
Parnian Kassraie
Google DeepMind
Andreas Krause
ETH Zurich

Machine Learning · Optimization · Sampling · Statistics