LAGO: A Local-Global Optimization Framework Combining Trust Region Methods and Bayesian Optimization

📅 2026-03-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge in black-box optimization of balancing global exploration with rapid local convergence. To this end, the authors propose the LAGO framework, which synergistically combines gradient-enhanced Bayesian optimization and trust-region local optimization through an adaptive competition mechanism. At each iteration, LAGO generates candidate points in parallel from both global and local strategies and dynamically selects the evaluation point based on predicted improvement. A key innovation lies in decoupling global exploration and local refinement at the candidate generation stage, while introducing a length-scale-based minimum distance criterion to filter local data, thereby mitigating numerical instability and ensuring both global coverage and efficient local convergence. Experimental results demonstrate that LAGO significantly outperforms conventional nonlinear local optimization methods on smooth functions, achieving superior global exploration capability alongside fast local convergence.
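The adaptive competition mechanism described above can be illustrated with a minimal sketch. The function and variable names below (`select_candidate`, `predicted_improvement`, the toy surrogate) are illustrative assumptions, not the paper's actual implementation: each iteration, a global (BO) and a local (trust-region) proposal compete, and the point with the larger predicted improvement over the incumbent best is evaluated next.

```python
import numpy as np

def predicted_improvement(surrogate_mean, best_f, x):
    """Improvement the surrogate predicts over the incumbent best (clipped at 0)."""
    return max(best_f - surrogate_mean(x), 0.0)

def select_candidate(x_global, x_local, surrogate_mean, best_f):
    """Adaptive competition (sketch): pick whichever proposal the surrogate
    predicts will improve most on the current best observed value."""
    pi_g = predicted_improvement(surrogate_mean, best_f, x_global)
    pi_l = predicted_improvement(surrogate_mean, best_f, x_local)
    return (x_local, "local") if pi_l >= pi_g else (x_global, "global")

# Toy demo: a quadratic surrogate mean standing in for the GP posterior mean.
surrogate_mean = lambda x: float(np.sum(np.asarray(x) ** 2))
x_next, source = select_candidate(np.array([2.0]), np.array([0.5]),
                                  surrogate_mean, best_f=1.0)
```

In this toy run the local proposal wins, since the surrogate predicts it improves on the incumbent while the global one does not.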

📝 Abstract
We introduce LAGO, a LocAl-Global Optimization algorithm that combines gradient-enhanced Bayesian Optimization (BO) with gradient-based trust-region local refinement through an adaptive competition mechanism. At each iteration, the global and local optimization strategies independently propose candidate points, and the next evaluation point is selected based on predicted improvement. LAGO separates global exploration from local refinement at the proposal level: the BO acquisition function is optimized outside the active trust region, while local function and gradient evaluations are incorporated into the global gradient-enhanced Gaussian process only if they satisfy a lengthscale-based minimum-distance criterion, reducing the risk of numerical instability during local exploitation. This enables efficient local refinement upon reaching promising regions without sacrificing global search of the design space. As a result, the method achieves better exploration of the full design space than standard nonlinear local optimization algorithms on smooth functions, while maintaining fast local convergence in regions of interest.
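The lengthscale-based minimum-distance criterion from the abstract can be sketched as follows. This is a hypothetical illustration, not the paper's code: the threshold `tau` and the sequential acceptance scheme are assumptions. A local evaluation is added to the global gradient-enhanced GP only if its lengthscale-scaled distance to every existing training point exceeds the threshold, which keeps near-duplicate points out of the kernel matrix and limits ill-conditioning.

```python
import numpy as np

def filter_local_points(X_train, X_local, lengthscales, tau=0.5):
    """Keep only local evaluations whose lengthscale-scaled distance to all
    current training points exceeds tau (sketch of a minimum-distance filter).

    Accepted points are appended to the working training set, so clustered
    local points also get filtered against each other."""
    X_train = np.atleast_2d(np.asarray(X_train, dtype=float))
    lengthscales = np.asarray(lengthscales, dtype=float)
    kept = []
    for x in np.atleast_2d(np.asarray(X_local, dtype=float)):
        d = np.sqrt(np.sum(((X_train - x) / lengthscales) ** 2, axis=1))
        if np.all(d > tau):
            kept.append(x)
            X_train = np.vstack([X_train, x])
    return np.array(kept)
```

For example, with a single training point at the origin and unit lengthscales, a local point at (0.1, 0.1) is rejected (scaled distance ≈ 0.14) while one at (2, 2) is accepted.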
Problem

Research questions and friction points this paper is trying to address.

global optimization
local refinement
Bayesian optimization
trust region methods
numerical stability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Local-Global Optimization
Trust Region Methods
Bayesian Optimization
Gradient-Enhanced Gaussian Process
Adaptive Competition Mechanism
🔎 Similar Papers
No similar papers found.
Eliott Van Dieren
Institut de Mathématiques, École Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland
Tommaso Vanzan
Dipartimento di Scienze Matematiche, Politecnico di Torino, 10129 Turin, Italy
Fabio Nobile
MATH - CSQI, Ecole Polytechnique Fédérale de Lausanne (EPFL)
numerical analysis, uncertainty quantification, fluid-structure interaction, cardiovascular modeling, finite elements