Knowledge Gradient for Multi-Objective Bayesian Optimization with Decoupled Evaluations

📅 2023-02-02
📈 Citations: 3
Influential: 0
🤖 AI Summary
In practical multi-objective Bayesian optimization (MOBO), objectives are often evaluated asynchronously and in a decoupled fashion, with differing latencies, costs, and availability, yet conventional MOBO methods assume synchronous, coupled evaluations. Method: This paper introduces the first decoupled-evaluation-aware multi-objective knowledge gradient (MO-KG) framework. It explicitly models inter-objective evaluation delays and cost heterogeneity, designs a target-adaptive sampling policy, and makes MO-KG computation tractable via Gaussian process surrogates and Monte Carlo gradient estimation. Contribution/Results: Theoretically, the Pareto front estimator is proven asymptotically consistent. Empirically, on standard benchmarks, the method reduces the average number of evaluations by up to 37% while significantly improving Pareto coverage and hypervolume (HV). This is the first generalization of the knowledge gradient to decoupled multi-objective settings, jointly optimizing evaluation cost and Pareto-front convergence.
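To make the summary concrete, here is a minimal NumPy sketch of a one-step knowledge-gradient estimate via Monte Carlo on a discrete candidate set, applied to a scalarized objective (e.g. a weighted combination of the objectives, as in a scalarization-based MO-KG). The RBF kernel, its length scale, the noise level, and the refit-per-fantasy loop are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def rbf(A, B, ls=0.3):
    """Squared-exponential kernel between row-wise point sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    """Exact GP posterior mean and covariance at candidates Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xs, X)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ y
    cov = rbf(Xs, Xs) - Ks @ Kinv @ Ks.T
    return mu, cov

def knowledge_gradient(X, y, Xs, x_idx, n_mc=64, rng=None):
    """Monte Carlo estimate of the one-step KG of evaluating Xs[x_idx]:
    expected increase in the maximum posterior mean after observing a
    'fantasy' sample drawn from the current posterior at that point."""
    rng = np.random.default_rng(rng)
    mu, cov = gp_posterior(X, y, Xs)
    best_now = mu.max()
    mu_x, var_x = mu[x_idx], cov[x_idx, x_idx]
    gains = []
    for _ in range(n_mc):
        # Fantasize an observation at the candidate, refit, re-maximize.
        y_f = mu_x + np.sqrt(max(var_x, 0.0)) * rng.standard_normal()
        X_aug = np.vstack([X, Xs[x_idx:x_idx + 1]])
        y_aug = np.append(y, y_f)
        mu_f, _ = gp_posterior(X_aug, y_aug, Xs)
        gains.append(mu_f.max() - best_now)
    return float(np.mean(gains))
```

In a scalarization-based multi-objective variant, `y` would hold scalarized values `w @ f(x)` for sampled weight vectors `w`; the KG estimate is then averaged over weights.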
Problem

Research questions and friction points this paper is trying to address.

Optimizes multi-objective trade-offs with minimal samples.
Handles decoupled objectives with varying evaluation costs.
Proposes a cost-aware acquisition function for faster learning.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Scalarization-based knowledge gradient acquisition function
Decoupled evaluations for multi-objective optimization
Asymptotic consistency in D-dimensional search space
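Decoupled evaluation means the optimizer chooses not only where to sample but which objective to query next, trading information gain against per-objective cost. A simple cost-normalized selection rule can sketch this idea; the greedy KG-per-cost criterion below is an illustrative heuristic, not necessarily the paper's exact policy.

```python
import numpy as np

def select_decoupled(kg_per_pair, costs):
    """Pick the (design, objective) pair with the best KG per unit cost.

    kg_per_pair: array of shape (n_designs, n_objectives) holding
    Monte Carlo KG estimates for evaluating each objective at each
    candidate design; costs: known per-objective evaluation costs.
    """
    rate = np.asarray(kg_per_pair) / np.asarray(costs)[None, :]
    i, j = np.unravel_index(np.argmax(rate), rate.shape)
    return int(i), int(j)
```

Under this rule, a cheap objective with moderate KG can be preferred over an expensive objective with slightly higher KG, which is exactly the trade-off that coupled acquisition functions cannot express.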
Jack M. Buckingham
MathSys CDT, University of Warwick, Coventry, UK
Sebastian Rojas Gonzalez
Surrogate Modeling Lab, Ghent University, Belgium; Data Science Institute, Hasselt University, Belgium
Juergen Branke
Professor of Operational Research and Systems, Warwick Business School
simulation optimization · metaheuristics · Bayesian optimization · multi-objective optimization