🤖 AI Summary
To reduce the cost of optimization problems with expensive objective evaluations, this paper proposes multifidelity trust-region methods that leverage low-fidelity (low-accuracy) surrogate models to accelerate convergence. The core idea integrates the Magical Trust Region (MTR) framework with coarse models through two algorithms: Sketched Trust-Region (STR), which applies random matrix projection to reduce the dimensionality of the trust-region subproblem, and SVD Trust-Region (SVDTR), which uses a truncated singular value decomposition to extract the dominant directions of variation. Both construct efficient low-dimensional approximations that guide the search, and both retain the trust-region subproblem machinery that underpins the convergence guarantees. Numerical experiments indicate that the proposed approaches maintain comparable convergence behavior while substantially reducing per-iteration overhead, with reported average speedups of 2.1–3.8× over baseline methods. The results highlight strong computational efficiency and practical applicability for expensive optimization tasks.
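To make the sketched-subproblem idea concrete, here is a minimal, self-contained sketch of the mechanism the summary describes: project an n-dimensional trust-region subproblem onto a random k-dimensional subspace, solve the small subproblem, and lift the step back. All function names, the exact subproblem solver, and the treatment of edge cases are illustrative assumptions, not the paper's actual STR algorithm.

```python
import numpy as np

def solve_tr_subproblem(g, H, delta):
    """Solve min_y g^T y + 0.5 y^T H y  s.t. ||y|| <= delta, via the
    eigendecomposition of H (fine for small dimensions). The classical
    'hard case' is ignored for simplicity in this sketch."""
    w, V = np.linalg.eigh(H)          # eigenvalues ascending
    gv = V.T @ g

    def y_of(lam):                    # step for a given Lagrange multiplier
        return -V @ (gv / (w + lam))

    if w[0] > 1e-12:                  # H positive definite: try interior step
        y = y_of(0.0)
        if np.linalg.norm(y) <= delta:
            return y
    # Otherwise the solution lies on the boundary: bisect on ||y(lam)|| = delta.
    lo = max(0.0, -w[0]) + 1e-12
    hi = lo + 1.0
    while np.linalg.norm(y_of(hi)) > delta:
        hi *= 2.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(y_of(mid)) > delta:
            lo = mid
        else:
            hi = mid
    return y_of(hi)

def sketched_tr_step(g, H, delta, k, seed=0):
    """Illustrative sketched step: restrict the subproblem to a random
    k-dim subspace s = S y with S orthonormal, solve it, and lift back.
    Since S has orthonormal columns, ||S y|| = ||y|| <= delta."""
    rng = np.random.default_rng(seed)
    S, _ = np.linalg.qr(rng.standard_normal((g.size, k)))
    y = solve_tr_subproblem(S.T @ g, S.T @ H @ S, delta)
    return S @ y
```

The lifted step is always feasible for the full-dimensional trust region, and the reduced model value is never worse than the zero step, which is the basic property a secondary "magical" direction needs.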
📝 Abstract
We introduce two multifidelity trust-region methods based on the Magical Trust Region (MTR) framework. MTR augments the classical trust-region step with a secondary, informative direction. In our approaches, the secondary "magical" directions are determined by solving coarse trust-region subproblems based on low-fidelity objective models. The first proposed method, Sketched Trust-Region (STR), constructs this secondary direction using a sketching matrix to reduce the dimensionality of the trust-region subproblem. The second method, SVD Trust-Region (SVDTR), defines the magical direction via a truncated singular value decomposition of the dataset, capturing the leading directions of variability. Several numerical examples illustrate the potential gain in efficiency.
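For the SVDTR idea, the abstract's "truncated singular value decomposition of the dataset, capturing the leading directions of variability" can be sketched as follows. The function name, the centering step, and the use of the subspace are assumptions for illustration, not the paper's notation.

```python
import numpy as np

def svd_subspace(X, k):
    """Return the leading k left singular vectors of a centered data
    matrix X (columns = samples). These span the dominant directions of
    variability and could serve as the low-dimensional subspace in which
    a coarse trust-region subproblem is solved."""
    Xc = X - X.mean(axis=1, keepdims=True)   # center each variable (row)
    U, _, _ = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :k]                          # orthonormal n-by-k basis
```

Given such a basis U, a coarse subproblem would be posed in the k coordinates of U and the resulting step lifted back to the full space, exactly as with a sketching matrix, but with directions chosen from data rather than at random.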