🤖 AI Summary
High-dimensional stochastic agent-based models (ABMs) are notoriously difficult to analyze systematically due to the curse of dimensionality and inherent stochasticity. This work proposes a multi-stage automated exploration framework that first employs model-driven experimental design to identify key variables and partition the parameter space, then leverages machine learning surrogate models to efficiently capture residual nonlinear interaction effects. The approach operates without human intervention, automatically detecting unstable regions within the simulator and enabling robust sensitivity analysis and policy testing. Applied to a predator–prey case study, the framework successfully isolates dominant variables and highly sensitive nonlinear regimes, substantially enhancing the efficiency and reliability of ABM exploration.
📝 Abstract
Systematic exploration of Agent-Based Models (ABMs) is challenged by the curse of dimensionality and by their inherent stochasticity. We present a multi-stage pipeline that integrates the design of experiments with machine learning surrogates. Using a predator–prey case study, our methodology proceeds in two steps. First, an automated model-based screening identifies dominant variables, assesses outcome variability, and segments the parameter space. Second, we train machine learning models to map the remaining nonlinear interaction effects. This approach automates the discovery of unstable regions where system outcomes depend strongly on nonlinear interactions between many variables. This work thus provides modelers with a rigorous, hands-off framework for sensitivity analysis and policy testing, even when dealing with high-dimensional stochastic simulators.
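The two-stage idea can be sketched in code. The snippet below is a minimal, self-contained illustration, not the paper's implementation: a hypothetical toy predator–prey simulator (`simulate`) stands in for the full ABM, one-at-a-time screening with replicated runs stands in for the model-based experimental design, and a nearest-neighbour lookup stands in for the machine learning surrogates; all function names and parameter ranges are invented for the example.

```python
import random
import statistics

def simulate(growth, predation, mortality, seed):
    """Hypothetical toy stochastic predator-prey model standing in for the ABM."""
    rng = random.Random(seed)
    prey, pred = 100.0, 20.0
    for _ in range(50):
        # Simple Lotka-Volterra-like update with additive noise (stochasticity).
        d_prey = prey * (growth - predation * pred / 100.0) + rng.gauss(0.0, 1.0)
        d_pred = pred * (predation * prey / 100.0 - mortality) + rng.gauss(0.0, 0.5)
        prey = max(prey + d_prey, 0.0)
        pred = max(pred + d_pred, 0.0)
    return prey  # outcome of interest: final prey population

def screen(param_ranges, n_levels=5, n_reps=10):
    """Stage 1 sketch: one-at-a-time screening, averaging over stochastic
    replicates and ranking each parameter by the range of its main effect."""
    base = {k: (lo + hi) / 2.0 for k, (lo, hi) in param_ranges.items()}
    effects = {}
    for name, (lo, hi) in param_ranges.items():
        means = []
        for i in range(n_levels):
            params = dict(base)
            params[name] = lo + (hi - lo) * i / (n_levels - 1)
            runs = [simulate(params["growth"], params["predation"],
                             params["mortality"], seed=r) for r in range(n_reps)]
            means.append(statistics.mean(runs))  # replicate average at this level
        effects[name] = max(means) - min(means)  # spread of the mean response
    return effects

def fit_surrogate(samples):
    """Stage 2 sketch: a trivial nearest-neighbour surrogate over
    (parameter-tuple, outcome) pairs, in place of real ML models."""
    def predict(x):
        best = min(samples, key=lambda s: sum((a - b) ** 2 for a, b in zip(s[0], x)))
        return best[1]
    return predict
```

In use, the parameter with the largest `screen` effect would be flagged as dominant, and the surrogate would then be trained on simulator runs within the retained region to map residual nonlinear behaviour cheaply.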