Bayesian Optimization for Non-Convex Two-Stage Stochastic Optimization Problems

📅 2024-08-30
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
We address expensive, non-convex, black-box two-stage stochastic optimization problems, common in simulation-driven applications, where conventional stochastic programming methods fail because they rely on convexity and cheap function evaluations. Our approach jointly optimizes the first-stage ("here-and-now") and second-stage ("wait-and-see") decision variables. We propose a knowledge-gradient (KG) acquisition function for this joint optimization, establish asymptotic consistency, and develop an efficient Monte Carlo approximation to make it practical. Across simulation benchmarks, the method outperforms the state of the art and the standard naive two-step benchmark, and performs comparably to an alternating-optimization alternative that uses fewer approximations but is more computationally intensive, offering a favorable trade-off between accuracy and efficiency.
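To make the Monte Carlo approximation concrete, here is a generic one-shot KG estimator over a toy Gaussian-process surrogate on the joint variable space. This is an illustration of the general fantasy-based KG idea, not the paper's algorithm; the kernel, lengthscale, noise level, and discretised search grid are all assumptions.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.3):
    """Squared-exponential kernel between the rows of A and B."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq_dists / lengthscale**2)

class GP:
    """Minimal zero-mean GP regressor with fixed hyperparameters."""
    def __init__(self, X, y, noise=1e-6):
        self.X, self.y, self.noise = X, y, noise
        self.K_inv = np.linalg.inv(rbf_kernel(X, X) + noise * np.eye(len(X)))

    def posterior(self, X_star):
        """Posterior mean and variance at the query points X_star."""
        K_s = rbf_kernel(X_star, self.X)
        mean = K_s @ self.K_inv @ self.y
        var = 1.0 - np.einsum("ij,jk,ik->i", K_s, self.K_inv, K_s)
        return mean, np.maximum(var, 1e-12)

def mc_knowledge_gradient(gp, x_cand, grid, n_fantasies=64, seed=None):
    """Monte Carlo estimate of the knowledge gradient at x_cand.

    KG(x) = E[max_z mu_{n+1}(z)] - max_z mu_n(z), where the expectation
    is over fantasy observations y ~ N(mu_n(x), var_n(x)) at the
    candidate, and the inner max runs over a fixed discretisation
    `grid` of the joint (first-stage, second-stage) space.
    """
    rng = np.random.default_rng(seed)
    mu_c, var_c = gp.posterior(x_cand[None, :])
    best_now = gp.posterior(grid)[0].max()
    gains = []
    for _ in range(n_fantasies):
        # Sample a fantasy outcome at the candidate, refit, re-maximise.
        y_fantasy = mu_c[0] + np.sqrt(var_c[0]) * rng.standard_normal()
        gp_fantasy = GP(np.vstack([gp.X, x_cand]),
                        np.append(gp.y, y_fantasy), gp.noise)
        gains.append(gp_fantasy.posterior(grid)[0].max())
    return float(np.mean(gains) - best_now)
```

A BO loop would score candidate joint points with `mc_knowledge_gradient` and query the expensive simulator at the maximiser; the paper's consistency guarantee and efficiency results concern a considerably more refined version of this idea.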

📝 Abstract
Bayesian optimization is a sample-efficient method for solving expensive, black-box optimization problems. Stochastic programming concerns optimization under uncertainty where, typically, average performance is the quantity of interest. In the first stage of a two-stage problem, here-and-now decisions must be made in the face of uncertainty, while in the second stage, wait-and-see decisions are made after the uncertainty has been resolved. Many methods in stochastic programming assume that the objective is cheap to evaluate and linear or convex. We apply Bayesian optimization to solve non-convex, two-stage stochastic programs which are black-box and expensive to evaluate as, for example, is often the case with simulation objectives. We formulate a knowledge-gradient-based acquisition function to jointly optimize the first- and second-stage variables, establish a guarantee of asymptotic consistency, and provide a computationally efficient approximation. We demonstrate comparable empirical results to an alternative we formulate with fewer approximations, which alternates its focus between the two variable types, and superior empirical results over the state of the art and the standard, naïve, two-step benchmark.
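In generic notation (ours, not necessarily the paper's), the two-stage problem described in the abstract can be written as:

```latex
\min_{x \in \mathcal{X}} \; \mathbb{E}_{\omega}\!\left[ \min_{y \in \mathcal{Y}} f(x, y, \omega) \right]
```

where $x$ is the here-and-now decision fixed before the uncertainty $\omega$ is realised, and $y$ is the wait-and-see (recourse) decision chosen after $\omega$ is observed. Classical stochastic programming handles this well when $f$ is linear or convex and cheap to evaluate; the paper targets the case where $f$ is a non-convex, expensive black box.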
Problem

Research questions and friction points this paper is trying to address.

Two-stage stochastic programs whose objectives are expensive, black-box, and non-convex, as in simulation-driven applications.
Conventional stochastic programming methods rely on convexity and cheap function evaluations.
First- and second-stage variables must be optimized jointly with few evaluations, calling for a sample-efficient acquisition function.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bayesian optimization for non-convex problems
Knowledge-gradient-based acquisition function
Asymptotic consistency guarantee
Jack M. Buckingham
EPSRC Centre for Doctoral Training in Mathematics for Real-World Systems, Mathematics Institute, University of Warwick
I. Couckuyt
Faculty of Engineering and Architecture, Ghent University - imec
Juergen Branke
Professor of Operational Research and Systems, Warwick Business School
simulation optimization, metaheuristics, Bayesian optimization, multi-objective optimization