CAGES: Cost-Aware Gradient Entropy Search for Efficient Local Multi-Fidelity Bayesian Optimization

📅 2024-05-13
🏛️ IEEE Conference on Decision and Control
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
Multi-fidelity Bayesian optimization (MF-BO) for high-dimensional expensive black-box functions suffers from low sample efficiency and strong modeling assumptions regarding inter-fidelity relationships. Method: We propose the first gradient-learning-oriented MF-BO framework, abandoning prior assumptions about fidelity correlations. It employs local Gaussian process modeling, introduces a model-agnostic, cost-aware gradient entropy search acquisition criterion, and integrates multi-fidelity gradient entropy estimation with adaptive fidelity selection. Contribution/Results: Our information-theoretic acquisition function maximizes gradient information gain per unit evaluation cost. Experiments on synthetic benchmarks and reinforcement learning policy optimization demonstrate significant improvements over state-of-the-art methods: under equal budget constraints, gradient estimation accuracy improves by up to 3.2×.

Technology Category

Application Category

📝 Abstract
Bayesian optimization (BO) is a popular approach for optimizing expensive-to-evaluate black-box objective functions. An important challenge in BO is its application to high-dimensional search spaces due in large part to the curse of dimensionality. One way to overcome this challenge is to focus on local BO methods that aim to efficiently learn gradients, which have shown strong empirical performance on highdimensional problems including policy search in reinforcement learning (RL). Current local BO methods assume access to only a single high-fidelity information source whereas, in many problems, one has access to multiple cheaper approximations of the objective. We propose a novel algorithm, Cost-Aware Gradient Entropy Search (CAGES), for local BO of multifidelity black-box functions. CAGES makes no assumption about the relationship between different information sources, making it more flexible than other multi-fidelity methods. It also employs a new information-theoretic acquisition function, which enables systematic identification of samples that maximize the information gain about the unknown gradient per evaluation cost. We demonstrate CAGES can achieve significant performance improvements compared to other state-of-the-art methods on synthetic and benchmark RL problems.
Problem

Research questions and friction points this paper is trying to address.

Optimizing high-dimensional black-box functions efficiently
Handling multi-fidelity data sources flexibly
Maximizing gradient information gain cost-effectively
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cost-aware gradient entropy search algorithm
Multi-fidelity black-box function optimization
Information-theoretic acquisition function
🔎 Similar Papers
No similar papers found.