🤖 AI Summary
This work addresses uncertainty propagation in multi-fidelity trajectory simulation under constrained computational budgets, without a separate upfront allocation for hyperparameter tuning. We propose an online automated hyperparameter optimization framework that exploits the correlations and cost disparities between high- and low-fidelity models, co-optimizing the low-fidelity hyperparameters during a single simulation run so that model configurations adapt in real time. Compared to manual tuning, the method significantly reduces estimation variance even at low budgets; in a realistic entry, descent, and landing validation case it improves estimation accuracy by up to 32%, and at larger budgets it converges toward the best-case estimator performance attainable when the optimal hyperparameters are known a priori. The core contribution is embedding hyperparameter optimization directly into the uncertainty propagation pipeline, yielding a tightly coupled, adaptive "simulate-optimize-propagate" loop.
📝 Abstract
Multifidelity uncertainty propagation combines the efficiency of low-fidelity models with the accuracy of a high-fidelity model to construct statistical estimators of quantities of interest. It is well known that the effectiveness of such methods depends crucially on the relative correlations and computational costs of the available computational models. However, the question of how to automatically tune low-fidelity models to maximize performance remains an open area of research. This work investigates automated model tuning, which optimizes model hyperparameters to minimize estimator variance within a target computational budget. Focusing on multifidelity trajectory simulation estimators, the cost-versus-precision tradeoff enabled by this approach is demonstrated in a practical, online setting where upfront tuning costs cannot be amortized. Using a real-world entry, descent, and landing example, it is shown that automated model tuning largely outperforms hand-tuned models even when the overall computational budget is relatively low. Furthermore, for scenarios where the computational budget is large, model tuning solutions can approach the best-case multifidelity estimator performance where optimal model hyperparameters are known a priori. Recommendations for applying model tuning in practice are provided and avenues for enabling adoption of such approaches for budget-constrained problems are highlighted.
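The core idea — a cheap, tunable low-fidelity model used as a control variate for an expensive high-fidelity model, with the low-fidelity hyperparameter chosen online to minimize estimator variance — can be sketched in a small toy example. Everything below is an illustrative assumption, not the paper's models or allocation scheme: `f_hi` and `f_lo` are synthetic stand-ins for trajectory simulations, the hyperparameter `h` artificially trades correlation against nothing (costs are fixed constants), and the budget split is a simple heuristic rather than an optimal allocation.

```python
import numpy as np

rng = np.random.default_rng(0)

def f_hi(x):
    # Toy "high-fidelity" model (stand-in for an expensive trajectory simulation).
    return np.sin(3 * x) + 0.05 * x**2

def f_lo(x, h):
    # Toy "low-fidelity" surrogate; hyperparameter h controls how strongly it
    # deviates from f_hi (larger h -> lower correlation). Purely illustrative.
    return np.sin(3 * x) + h * np.cos(7 * x)

def mf_estimate(budget, h, cost_hi=1.0, cost_lo=0.01, pilot=50):
    """Two-fidelity control-variate estimate of E[f_hi(X)], X ~ N(0,1)."""
    # Pilot run: estimate HF/LF correlation and the optimal CV coefficient
    # alpha = rho * sigma_hi / sigma_lo from paired evaluations.
    xp = rng.standard_normal(pilot)
    yh, yl = f_hi(xp), f_lo(xp, h)
    rho = np.corrcoef(yh, yl)[0, 1]
    alpha = rho * yh.std(ddof=1) / yl.std(ddof=1)

    # Heuristic budget split: half the remainder on paired HF+LF samples,
    # the rest on extra cheap LF samples (a real method would optimize this).
    remaining = budget - pilot * (cost_hi + cost_lo)
    n = max(2, int(0.5 * remaining / (cost_hi + cost_lo)))
    m = max(0, int((remaining - n * (cost_hi + cost_lo)) / cost_lo))

    x_pair = rng.standard_normal(n)
    x_extra = rng.standard_normal(m)
    mu_lo_all = np.concatenate([f_lo(x_pair, h), f_lo(x_extra, h)]).mean()
    est = f_hi(x_pair).mean() + alpha * (mu_lo_all - f_lo(x_pair, h).mean())
    return est, rho

# Online "tuning" step: try candidate hyperparameters and keep the one with
# the highest HF/LF correlation, a proxy for the lowest estimator variance.
budget = 500.0
best_h = max([0.05, 0.3, 1.0], key=lambda h: mf_estimate(budget, h)[1])
est, rho = mf_estimate(budget, best_h)
print(f"h*={best_h}, corr={rho:.3f}, estimate={est:.3f}")
```

For this toy problem the true value is E[f_hi(X)] = 0.05 (the sine term has zero mean, and E[X^2] = 1), so the estimate should land close to 0.05; the point of the sketch is only that the hyperparameter choice, the correlation estimate, and the budget allocation all happen inside one run, mirroring the online setting the abstract describes.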