🤖 AI Summary
For stochastic optimization problems that require repeated evaluations of expensive high-fidelity simulations, existing bi-fidelity methods exploit either design-space curvature or stochastic-space correlation in isolation, sacrificing either convergence speed or accuracy. This paper proposes a bi-fidelity stochastic trust-region framework: it first leverages curvature information from a low-fidelity model to rapidly reach a neighborhood of a local minimum of the high-fidelity objective, then switches to variance-reduced stochastic gradient methods (e.g., SVRG or SPIDER) for fine-grained optimization. It is the first to synergistically integrate curvature guidance with stochastic-space correlation modeling, featuring a curvature-driven, fidelity-adaptive switching mechanism and an $\mathcal{O}(1/n)$ convergence-rate guarantee. On benchmark problems and a 20-dimensional space shuttle reentry application, the method converges significantly faster than adaptive-sampling and variance-reduction baselines, reducing computational cost by up to 29×.
📝 Abstract
Stochastic optimization of engineering systems is often infeasible due to the repeated evaluations of a computationally expensive, high-fidelity simulation it requires. Bi-fidelity methods mitigate this challenge by leveraging a cheaper, approximate model to accelerate convergence. Most existing bi-fidelity approaches, however, exploit either design-space curvature or random-space correlation, not both. We present BISTRO, a BI-fidelity Stochastic Trust-Region Optimizer, for unconstrained optimization under uncertainty via a stochastic approximation procedure. BISTRO exploits the curvature information of a low-fidelity objective function to converge to a basin of a local minimum of the high-fidelity model, at which point low-fidelity curvature information is no longer valuable and the method switches to a variance-reduced stochastic gradient descent procedure. We provide convergence guarantees in expectation under standard regularity assumptions, attaining the best-case $\mathcal{O}(1/n)$ convergence rate for stochastic optimization. On benchmark problems and a 20-dimensional space shuttle reentry case, BISTRO converges faster than adaptive sampling and variance reduction procedures and cuts computational expense by up to 29×.
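The two-phase idea described above can be sketched on a toy problem: trust-region Newton steps driven by a cheap low-fidelity model's curvature, followed by an SVRG-style variance-reduced stochastic gradient phase on the noisy high-fidelity objective. Everything below (the quadratic test objectives, the low-fidelity bias, the switching tolerance, the step sizes) is an illustrative assumption for this sketch, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy high-fidelity objective f_hi(x) = 0.5 x^T A x - b_hi^T x, observed with noise.
# Low-fidelity surrogate: same curvature A but a biased linear term b_lo != b_hi.
A = np.array([[3.0, 0.5], [0.5, 2.0]])
b_hi = np.array([1.0, -2.0])
b_lo = b_hi + np.array([0.3, -0.2])            # low-fidelity bias (assumed)

def grad_lo(x):                                 # exact, cheap low-fidelity gradient
    return A @ x - b_lo

def grad_hi(x, noise):                          # one noisy high-fidelity gradient sample
    return A @ x - b_hi + noise

x = np.array([5.0, 5.0])

# Phase 1: trust-region-style Newton steps on the low-fidelity model,
# clipped to a trust radius, until the LF model is locally solved.
radius = 2.0
for _ in range(20):
    step = -np.linalg.solve(A, grad_lo(x))      # Newton step using LF curvature
    if np.linalg.norm(step) > radius:
        step *= radius / np.linalg.norm(step)
    x = x + step
    if np.linalg.norm(grad_lo(x)) < 1e-3:       # LF curvature no longer helps: switch
        break

# Phase 2: SVRG-style variance reduction on the high-fidelity objective.
# Common random numbers make the control variate cancel the sampling noise.
lr = 0.1
for epoch in range(30):
    x_ref = x.copy()
    ref_noise = 0.05 * rng.standard_normal((256, 2))
    g_ref = np.mean([grad_hi(x_ref, n) for n in ref_noise], axis=0)
    for _ in range(10):
        noise = 0.05 * rng.standard_normal(2)
        g = grad_hi(x, noise) - grad_hi(x_ref, noise) + g_ref
        x = x - lr * g

x_star = np.linalg.solve(A, b_hi)               # true high-fidelity minimizer
print(np.linalg.norm(x - x_star))
```

Phase 1 lands near the low-fidelity minimizer, which is biased away from the true solution; phase 2 removes that bias with cheap variance-reduced stochastic steps, mirroring the switching logic the abstract describes.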