BISTRO - A Bi-Fidelity Stochastic Gradient Framework using Trust-Regions for Optimization Under Uncertainty

📅 2025-12-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
For stochastic optimization problems involving expensive high-fidelity simulations, existing bi-fidelity methods exploit either design-space curvature or stochastic-space correlation in isolation, compromising either convergence speed or accuracy. This paper proposes a bi-fidelity stochastic trust-region framework: it first leverages curvature information from a low-fidelity model to rapidly identify a neighborhood of a local minimum of the high-fidelity objective, then switches to variance-reduced stochastic gradient methods (e.g., SVRG or SPIDER) for fine-grained optimization. It is the first to integrate curvature guidance with stochastic-space correlation modeling, featuring a curvature-driven, fidelity-adaptive switching mechanism and establishing an $O(1/n)$ convergence rate guarantee. On benchmark problems and a 20-dimensional space shuttle reentry application, the method converges significantly faster than adaptive sampling and variance-reduction baselines, reducing computational cost by up to 29×.
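The two-phase structure described above can be sketched schematically: cheap low-fidelity curvature drives Newton-like steps toward the basin, then an SVRG-style variance-reduced loop refines the solution. This is a hypothetical illustration, not the paper's BISTRO implementation: the names (`two_phase_minimize`, `grad_hi`, `hess_lo`), the gradient-norm switching test, and the use of common random seeds for variance reduction are all assumptions made for the sketch.

```python
import numpy as np

def two_phase_minimize(grad_hi, hess_lo, x0, phase1_iters=50,
                       phase2_epochs=20, batch=8, step=0.05, switch_tol=0.05):
    """Schematic two-phase bi-fidelity scheme (hypothetical, not BISTRO itself).

    grad_hi(x, seed): stochastic high-fidelity gradient, deterministic per seed.
    hess_lo(x): cheap (possibly biased) low-fidelity Hessian approximation.
    """
    rng = np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    d = len(x)

    # Phase 1: regularized Newton steps using low-fidelity curvature,
    # aiming to reach the basin of the high-fidelity minimum quickly.
    for _ in range(phase1_iters):
        seeds = rng.integers(0, 2**31, size=batch)
        g = np.mean([grad_hi(x, s) for s in seeds], axis=0)
        if np.linalg.norm(g) < switch_tol:  # crude stand-in for the switching test
            break
        H = hess_lo(x)
        x = x - np.linalg.solve(H + 1e-8 * np.eye(d), g)

    # Phase 2: SVRG-style variance-reduced SGD; reusing the same seed at x
    # and at the reference point x_ref cancels the shared gradient noise.
    for _ in range(phase2_epochs):
        x_ref = x.copy()
        ref_seeds = rng.integers(0, 2**31, size=4 * batch)
        g_full = np.mean([grad_hi(x_ref, s) for s in ref_seeds], axis=0)
        for _ in range(batch):
            s = int(rng.integers(0, 2**31))
            v = grad_hi(x, s) - grad_hi(x_ref, s) + g_full
            x = x - step * v
    return x
```

On a noisy quadratic with a biased low-fidelity Hessian (e.g., `1.1 * A`), phase 1 contracts toward the minimizer in a handful of steps, after which the variance-reduced phase 2 drives the iterate down to the noise floor of the full-gradient estimate.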

📝 Abstract
Stochastic optimization of engineering systems is often infeasible due to repeated evaluations of a computationally expensive, high-fidelity simulation. Bi-fidelity methods mitigate this challenge by leveraging a cheaper, approximate model to accelerate convergence. Most existing bi-fidelity approaches, however, exploit either design-space curvature or random-space correlation, not both. We present BISTRO - a BI-fidelity Stochastic Trust-Region Optimizer for unconstrained optimization under uncertainty through a stochastic approximation procedure. This approach exploits the curvature information of a low-fidelity objective function to converge within a basin of a local minimum of the high-fidelity model where low-fidelity curvature information is no longer valuable. The method then switches to a variance-reduced stochastic gradient descent procedure. We provide convergence guarantees in expectation under certain regularity assumptions and ensure the best-case $\mathcal{O}(1/n)$ convergence rate for stochastic optimization. On benchmark problems and a 20-dimensional space shuttle reentry case, BISTRO converges faster than adaptive sampling and variance reduction procedures and cuts computational expense by up to 29×.
Problem

Research questions and friction points this paper is trying to address.

Optimizes engineering systems under uncertainty using bi-fidelity models
Combines design-space curvature and random-space correlation for efficiency
Reduces computational cost while ensuring convergence guarantees
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bi-fidelity trust-region optimizer for uncertainty
Switches to variance-reduced stochastic gradient descent
Ensures O(1/n) convergence rate with lower cost
Thomas O. Dixon
Department of Aerospace Engineering, University of Michigan, Ann Arbor, MI
Geoffrey F. Bomarito
Durability, Damage Tolerance, and Reliability Branch, NASA Langley Research Center, Hampton, VA
James E. Warner
NASA Langley Research Center
Uncertainty Quantification · Scientific Machine Learning
Alex A. Gorodetsky
Department of Aerospace Engineering, University of Michigan, Ann Arbor, MI