🤖 AI Summary
In practical multi-objective Bayesian optimization (MOBO), objectives are often evaluated asynchronously and in a decoupled fashion, with each objective differing in evaluation latency, cost, and availability; conventional MOBO methods, by contrast, assume synchronous, coupled evaluations.
Method: This paper introduces the first decoupled-evaluation-aware multi-objective knowledge gradient (MO-KG) framework. It explicitly models inter-objective evaluation delays and cost heterogeneity, designs a target-adaptive sampling policy, and enables efficient MO-KG computation via Gaussian process surrogates and Monte Carlo gradient estimation.
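The summary names the ingredients but not the exact acquisition function, so the following is only an illustrative sketch of how those ingredients typically combine: independent GP surrogates per objective, a Monte Carlo (fantasy-sample) one-step knowledge-gradient estimate, and a decoupled, cost-aware query that picks a single (point, objective) pair by gain-per-cost. All data, candidate points, and per-objective costs below are hypothetical, not taken from the paper.

```python
import numpy as np

def rbf(X1, X2, ls=0.3):
    # Squared-exponential kernel on 1-D inputs.
    d = X1[:, None] - X2[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(Xtr, ytr, Xte, noise=1e-4):
    # Exact GP posterior mean and variance at test points Xte.
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    L = np.linalg.cholesky(K)
    Ks = rbf(Xtr, Xte)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, ytr))
    v = np.linalg.solve(L, Ks)
    mu = Ks.T @ alpha
    var = np.clip(np.diag(rbf(Xte, Xte)) - (v * v).sum(axis=0), 1e-12, None)
    return mu, var

def mc_kg(Xtr, ytr, x, Xgrid, n_fant=64, seed=0):
    # Monte Carlo one-step knowledge gradient for one objective:
    # expected gain in the grid-max of the posterior mean after a
    # fantasized observation y ~ N(mu(x), var(x)) at candidate x.
    rng = np.random.default_rng(seed)
    mu_x, var_x = gp_posterior(Xtr, ytr, np.array([x]))
    base = gp_posterior(Xtr, ytr, Xgrid)[0].max()
    ys = rng.normal(mu_x[0], np.sqrt(var_x[0]), size=n_fant)
    gains = [
        gp_posterior(np.append(Xtr, x), np.append(ytr, y), Xgrid)[0].max() - base
        for y in ys
    ]
    return float(np.mean(gains))

# Decoupled, cost-aware selection: instead of evaluating all objectives
# at one point, query only the (point, objective) pair with the best
# KG-per-cost ratio. Costs and observations are made up for the demo.
Xgrid = np.linspace(0, 1, 41)
data = {
    "f1": (np.array([0.1, 0.5, 0.9]), np.sin(3 * np.array([0.1, 0.5, 0.9]))),
    "f2": (np.array([0.2, 0.8]), np.cos(2 * np.array([0.2, 0.8]))),
}
cost = {"f1": 1.0, "f2": 5.0}  # hypothetical per-objective evaluation costs
best = max(
    ((obj, x, mc_kg(Xtr, ytr, x, Xgrid) / cost[obj])
     for obj, (Xtr, ytr) in data.items() for x in (0.3, 0.7)),
    key=lambda t: t[2],
)
print("next query:", best[0], "at x =", best[1])
```

The sketch captures the economic intuition behind decoupling: an expensive objective must promise proportionally more information gain before it is worth querying. The paper's actual method additionally models evaluation delays and works on the Pareto front rather than a single objective's maximum.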
Contribution/Results: Theoretically, the Pareto front estimator is proven asymptotically consistent. Empirically, on standard benchmarks, the method reduces average total evaluations by up to 37% while significantly improving Pareto coverage and hypervolume (HV). This work constitutes the first generalization of the knowledge gradient to decoupled multi-objective settings, jointly optimizing evaluation cost and Pareto front convergence.
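Hypervolume (HV), one of the reported metrics, measures the volume of objective space dominated by a Pareto set relative to a fixed reference point; for two maximization objectives it reduces to a sorted rectangle sweep. A minimal sketch (the points and reference below are illustrative, not the paper's data):

```python
def hypervolume_2d(points, ref):
    # HV of a 2-D maximization Pareto set w.r.t. reference point ref:
    # sweep points in descending first objective, stacking the rectangle
    # each nondominated point adds above the previous best height.
    pts = sorted(
        (p for p in points if p[0] > ref[0] and p[1] > ref[1]),
        key=lambda p: p[0],
    )
    hv, prev_y = 0.0, ref[1]
    for x, y in reversed(pts):  # descending x
        if y > prev_y:          # dominated points contribute nothing
            hv += (x - ref[0]) * (y - prev_y)
            prev_y = y
    return hv

# Union of rectangles [0,3]x[0,1] and [0,1]x[0,3] has area 3 + 3 - 1 = 5.
print(hypervolume_2d([(1.0, 3.0), (3.0, 1.0), (2.0, 0.5)], (0.0, 0.0)))
```

Larger HV means the estimated Pareto front both extends further and covers more trade-offs, which is why HV is paired with coverage when judging convergence to the true front.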