Automated Model Tuning for Multifidelity Uncertainty Propagation in Trajectory Simulation

📅 2025-09-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses uncertainty propagation in multifidelity trajectory simulation under constrained computational budgets, where no separate allocation exists for hyperparameter tuning overhead. It proposes an online automated hyperparameter optimization framework that exploits the correlations and cost disparities between high- and low-fidelity models, co-optimizing low-fidelity hyperparameters during a single simulation run so that configurations adapt in real time. Compared to manual tuning, the method significantly reduces estimation variance even at low computational budgets; in a realistic entry, descent, and landing mission validation it improves estimation accuracy by up to 32%, and at higher budgets it converges to the best-case performance attainable when the optimal hyperparameters are known a priori. The core innovation is embedding hyperparameter optimization directly into the uncertainty propagation pipeline, yielding a tightly coupled, adaptive "simulate-optimize-propagate" mechanism.

📝 Abstract
Multifidelity uncertainty propagation combines the efficiency of low-fidelity models with the accuracy of a high-fidelity model to construct statistical estimators of quantities of interest. It is well known that the effectiveness of such methods depends crucially on the relative correlations and computational costs of the available computational models. However, the question of how to automatically tune low-fidelity models to maximize performance remains an open area of research. This work investigates automated model tuning, which optimizes model hyperparameters to minimize estimator variance within a target computational budget. Focusing on multifidelity trajectory simulation estimators, the cost-versus-precision tradeoff enabled by this approach is demonstrated in a practical, online setting where upfront tuning costs cannot be amortized. Using a real-world entry, descent, and landing example, it is shown that automated model tuning largely outperforms hand-tuned models even when the overall computational budget is relatively low. Furthermore, for scenarios where the computational budget is large, model tuning solutions can approach the best-case multifidelity estimator performance where optimal model hyperparameters are known a priori. Recommendations for applying model tuning in practice are provided and avenues for enabling adoption of such approaches for budget-constrained problems are highlighted.
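The estimator described in the abstract, combining a cheap low-fidelity model with a high-fidelity model via their correlation, can be sketched as a standard two-model control-variate Monte Carlo estimator. The models below are hypothetical stand-ins for illustration only, not the paper's trajectory simulators; the control-variate weight is estimated from paired evaluations on shared inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: f_hi is the expensive high-fidelity model,
# f_lo a cheap, correlated low-fidelity approximation.
def f_hi(z):
    return np.sin(z) + 0.05 * z**2

def f_lo(z):
    return np.sin(z)  # correlated but biased

n_hi, n_lo = 50, 2000            # low-fidelity samples are much cheaper
z_hi = rng.normal(size=n_hi)     # shared inputs for paired evaluations
z_extra = rng.normal(size=n_lo - n_hi)

y_hi = f_hi(z_hi)
y_lo_paired = f_lo(z_hi)
y_lo_all = np.concatenate([y_lo_paired, f_lo(z_extra)])

# Control-variate weight estimated from the paired samples
alpha = np.cov(y_hi, y_lo_paired)[0, 1] / np.var(y_lo_paired, ddof=1)

# Multifidelity estimate of E[f_hi]: high-fidelity mean corrected by the
# difference between the cheap large-sample and paired low-fidelity means
mf_estimate = y_hi.mean() + alpha * (y_lo_all.mean() - y_lo_paired.mean())
```

The variance reduction relative to plain Monte Carlo on `f_hi` alone scales with the squared correlation between the two models' outputs, which is exactly why tuning the low-fidelity model's hyperparameters (to raise that correlation or lower its cost) pays off.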
Problem

Research questions and friction points this paper is trying to address.

Automated tuning of low-fidelity models to minimize estimator variance
Optimizing hyperparameters within constrained computational budgets for trajectory simulation
Addressing performance gap between hand-tuned and optimally tuned models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated hyperparameter tuning of low-fidelity models
Minimizes estimator variance within budget
Optimizes low-fidelity models for performance
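The "minimize estimator variance within a budget" idea above has a closed form in the standard two-model multifidelity Monte Carlo setting: given the models' output correlation and per-sample costs, the variance-optimal ratio of low- to high-fidelity samples follows from the classic MFMC allocation result. The sketch below uses that generic formula as an illustration of the tradeoff the paper optimizes over; it is not the paper's own tuning algorithm, and the numbers are hypothetical.

```python
import math

def mfmc_allocation(rho, c_hi, c_lo, budget):
    """Two-model MFMC sample allocation.

    Minimizes estimator variance subject to the cost constraint
    c_hi * n_hi + c_lo * n_lo <= budget.
    rho:  correlation between high- and low-fidelity outputs
    c_hi, c_lo: per-sample evaluation costs
    Returns (n_hi, n_lo) as real-valued sample counts.
    """
    # Variance-optimal ratio of low- to high-fidelity samples
    r = math.sqrt(rho**2 * c_hi / ((1.0 - rho**2) * c_lo))
    # Spend the whole budget at that ratio
    n_hi = budget / (c_hi + r * c_lo)
    n_lo = r * n_hi
    return n_hi, n_lo
```

For example, with a strongly correlated surrogate (`rho = 0.95`) that is 100x cheaper per sample, the allocation assigns dozens of low-fidelity samples per high-fidelity one; as `rho` falls or `c_lo` rises, the benefit of the low-fidelity model shrinks, which is the tradeoff surface the automated tuning navigates.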
James E. Warner
NASA Langley Research Center
Uncertainty Quantification · Scientific Machine Learning
Geoffrey F. Bomarito
NASA Langley Research Center, Hampton, VA 23681, USA
Gianluca Geraci
Sandia National Laboratories, Albuquerque, NM 87185, USA
Michael S. Eldred
Sandia National Laboratories, Albuquerque, NM 87185, USA