Surrogate Modeling and Explainable Artificial Intelligence for Complex Systems: A Workflow for Automated Simulation Exploration

📅 2025-10-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Complex system simulation faces two major challenges: high computational cost and limited interpretability of black-box models. This paper proposes a surrogate-model-driven eXplainable Artificial Intelligence (XAI) workflow that unifies global sensitivity analysis, uncertainty quantification, and local attribution methods to jointly model continuous and categorical variables. A novel explanation consistency evaluation mechanism is introduced to dynamically diagnose surrogate model adequacy, thereby guiding data acquisition optimization and structural refinement. Leveraging experimental design, a compact training dataset is constructed to train lightweight surrogates, enabling second-scale exploration of large-scale simulations. The method reveals nonlinear interactions and emergent behaviors, identifies critical design or policy levers, and pinpoints model weaknesses. Its effectiveness and generalizability are validated across diverse domains, including engineering design and socio-environmental simulation.

📝 Abstract
Complex systems are increasingly explored through simulation-driven engineering workflows that combine physics-based and empirical models with optimization and analytics. Despite their power, these workflows face two central obstacles: (1) high computational cost, since accurate exploration requires many expensive simulator runs; and (2) limited transparency and reliability when decisions rely on opaque black-box components. We propose a workflow that addresses both challenges by training lightweight emulators on compact designs of experiments that (i) provide fast, low-latency approximations of expensive simulators, (ii) enable rigorous uncertainty quantification, and (iii) are adapted for global and local Explainable Artificial Intelligence (XAI) analyses. The workflow applies across simulation-based complex-system analyses, ranging from engineering design to agent-based models for socio-environmental understanding. In this paper, we propose a comparative methodology and practical recommendations for using surrogate-based explainability tools within the proposed workflow. The methodology supports continuous and categorical inputs, combines global-effect and uncertainty analyses with local attribution, and evaluates the consistency of explanations across surrogate models, thereby diagnosing surrogate adequacy and guiding further data collection or model refinement. We demonstrate the approach on two contrasting case studies: a multidisciplinary design analysis of a hybrid-electric aircraft and an agent-based model of urban segregation. Results show that coupling surrogate models with XAI enables large-scale exploration in seconds, uncovers nonlinear interactions and emergent behaviors, identifies key design and policy levers, and signals regions where surrogates require more data or alternative architectures.
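As a concrete illustration of the workflow the abstract describes, the sketch below trains a lightweight surrogate on a compact design of experiments and then runs a crude global sensitivity analysis using only cheap surrogate calls. Everything here is an illustrative assumption rather than the paper's actual models: the toy simulator, the 5x5 grid design, the polynomial surrogate form, and the Sobol-style main-effect estimator.

```python
import random

# Hypothetical "expensive" simulator (a stand-in: real runs could take
# minutes or hours each).
def expensive_simulator(x1, x2):
    return 4.0 * x1 + 1.0 * x2 + 2.0 * x1 * x2

# Compact design of experiments: a small 5x5 grid over [0, 1]^2.
design = [(i / 4, j / 4) for i in range(5) for j in range(5)]
observations = [expensive_simulator(a, b) for a, b in design]

# Lightweight surrogate: least-squares fit of
# y ~ c0*x1 + c1*x2 + c2*x1*x2 + c3, solved via the normal equations.
features = [(a, b, a * b, 1.0) for a, b in design]

def solve(A, rhs):
    """Gaussian elimination with partial pivoting (mutates its arguments)."""
    n = len(rhs)
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        rhs[i], rhs[p] = rhs[p], rhs[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
            rhs[r] -= f * rhs[i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (rhs[i] - sum(A[i][c] * x[c] for c in range(i + 1, n))) / A[i][i]
    return x

AtA = [[sum(f[j] * f[k] for f in features) for k in range(4)] for j in range(4)]
Atb = [sum(f[j] * y for f, y in zip(features, observations)) for j in range(4)]
coeffs = solve(AtA, Atb)

def surrogate(x1, x2):
    return coeffs[0] * x1 + coeffs[1] * x2 + coeffs[2] * x1 * x2 + coeffs[3]

# Global main-effect sensitivity (a crude first-order Sobol-style index):
# the variance of the surrogate's conditional mean when one input is fixed,
# estimated from thousands of cheap surrogate calls, not simulator runs.
rng = random.Random(0)
samples = [(rng.random(), rng.random()) for _ in range(4000)]
outputs = [surrogate(a, b) for a, b in samples]
mean_y = sum(outputs) / len(outputs)
total_var = sum((y - mean_y) ** 2 for y in outputs) / len(outputs)

def main_effect(index):
    cond_means = []
    for t in [i / 20 for i in range(21)]:
        vals = [surrogate(t, b) if index == 0 else surrogate(a, t)
                for a, b in samples[:500]]
        cond_means.append(sum(vals) / len(vals))
    m = sum(cond_means) / len(cond_means)
    return sum((c - m) ** 2 for c in cond_means) / len(cond_means) / total_var

S1, S2 = main_effect(0), main_effect(1)
```

Once the surrogate is fit, the entire sensitivity analysis costs only surrogate evaluations, which is what makes second-scale exploration of an otherwise expensive simulator plausible.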
Problem

Research questions and friction points this paper is trying to address.

Reducing computational costs of complex system simulations
Enhancing transparency in black-box simulation components
Providing uncertainty quantification and explainable AI analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight emulators replace expensive simulators
Emulators enable rigorous uncertainty quantification
Surrogates adapted for explainable AI analyses
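The explanation-consistency mechanism mentioned in the summary, comparing local attributions across surrogate models to diagnose surrogate adequacy, can be sketched as follows. Both surrogates below are hand-written stand-ins for trained models, and the central-finite-difference attribution is a hypothetical simplification of methods such as SHAP or LIME.

```python
import math

# Two stand-in surrogates, assumed already trained on the same design of
# experiments (hard-coded here purely for illustration): one linear,
# one including an x1*x2 interaction term.
def surrogate_linear(x1, x2):
    return 3.2 * x1 + 0.4 * x2 + 0.1

def surrogate_interaction(x1, x2):
    return 3.0 * x1 + 0.1 * x2 + 2.0 * x1 * x2

# Local attribution via central finite differences, a simple gradient
# stand-in for attribution methods such as SHAP or LIME.
def attributions(model, x, eps=1e-4):
    grads = []
    for k in range(len(x)):
        hi, lo = list(x), list(x)
        hi[k] += eps
        lo[k] -= eps
        grads.append((model(*hi) - model(*lo)) / (2 * eps))
    return grads

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Explanation consistency at two probe points: high agreement suggests the
# surrogate family is adequate there, while low agreement flags a region
# needing more simulator runs or a different surrogate architecture.
probes = [(0.1, 0.1), (0.9, 0.9)]
scores = {p: cosine(attributions(surrogate_linear, p),
                    attributions(surrogate_interaction, p))
          for p in probes}
```

Here consistency degrades near (0.9, 0.9), where the interaction term dominates: exactly the kind of signal the workflow uses to trigger further data acquisition or structural refinement.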