Approximation-Aware Bayesian Optimization

📅 2024-06-06
🏛️ Neural Information Processing Systems
📈 Citations: 3
Influential: 1
🤖 AI Summary
High-dimensional Bayesian optimization (e.g., molecular design) suffers from expensive function evaluations, and standard sparse variational Gaussian processes (SVGPs) degrade acquisition quality because they prioritize global posterior fidelity over decision-relevant accuracy. This paper proposes a utility-calibrated variational inference framework that unifies GP approximation and decision-oriented data acquisition into a single joint optimization problem, ensuring optimal acquisition decisions under a limited computational budget. The framework works with any decision-theoretic acquisition function (the authors derive efficient joint objectives for expected improvement (EI) and knowledge gradient) and supports trust-region methods such as TuRBO as well as batch-optimization settings. On high-dimensional benchmark tasks in control and molecular design, it outperforms standard SVGPs, reaching better solutions with fewer function evaluations and faster convergence.

📝 Abstract
High-dimensional Bayesian optimization (BO) tasks such as molecular design often require 10,000 function evaluations before obtaining meaningful results. While methods like sparse variational Gaussian processes (SVGPs) reduce computational requirements in these settings, the underlying approximations result in suboptimal data acquisitions that slow the progress of optimization. In this paper we modify SVGPs to better align with the goals of BO: targeting informed data acquisition rather than global posterior fidelity. Using the framework of utility-calibrated variational inference, we unify GP approximation and data acquisition into a joint optimization problem, thereby ensuring optimal decisions under a limited computational budget. Our approach can be used with any decision-theoretic acquisition function and is compatible with trust region methods like TuRBO. We derive efficient joint objectives for the expected improvement and knowledge gradient acquisition functions in both the standard and batch BO settings. Our approach outperforms standard SVGPs on high-dimensional benchmark tasks in control and molecular design.
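The abstract mentions deriving joint objectives for the expected improvement (EI) acquisition function. For context, a minimal sketch of the standard closed-form EI under a Gaussian posterior is shown below; this is the generic textbook formula (for maximization), not the paper's utility-calibrated joint objective, and the function name and signature are illustrative.

```python
import math

def expected_improvement(mu, sigma, best_f):
    """Closed-form EI for maximization, given a Gaussian posterior
    N(mu, sigma^2) at a candidate point and incumbent value best_f.

    EI(x) = (mu - best_f) * Phi(z) + sigma * phi(z),  z = (mu - best_f) / sigma
    """
    if sigma <= 0.0:
        # Degenerate posterior: improvement is deterministic.
        return max(mu - best_f, 0.0)
    z = (mu - best_f) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # standard normal phi(z)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # standard normal Phi(z)
    return (mu - best_f) * cdf + sigma * pdf
```

In SVGP-based BO, `mu` and `sigma` come from the approximate posterior, which is why approximation error feeds directly into acquisition quality; the paper's contribution is to fit that approximation with the acquisition utility in the objective rather than afterward.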
Problem

Research questions and friction points this paper is trying to address.

Reducing computational cost in high-dimensional Bayesian optimization
Improving data acquisition efficiency in sparse variational Gaussian processes
Unifying GP approximation and data acquisition for optimal decisions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modifies SVGPs for better data acquisition
Unifies GP approximation and data acquisition
Compatible with trust region methods