🤖 AI Summary
Under Industry 4.0, AI-driven supply chain optimization faces stakeholder resistance due to its “black-box” nature, creating a persistent trade-off between interpretability and performance. To address this, we propose a novel interpretable decision-making framework that synergistically integrates evolutionary computation with deep reinforcement learning. Specifically, our method automatically generates transparent, decision-tree-based policies within a discrete-event simulation-optimization environment explicitly designed to model supply chain uncertainty. This work represents the first deep integration of eXplainable Artificial Intelligence (XAI) with simulation-based optimization, challenging the conventional assumption that interpretability inherently compromises performance. Empirical evaluation across both synthetic and real-world supply chain scenarios demonstrates that our approach achieves decision quality on par with—or superior to—state-of-the-art optimization and end-to-end reinforcement learning baselines, while ensuring industrial deployability and cross-algorithm compatibility.
📝 Abstract
In the context of Industry 4.0, Supply Chain Management (SCM) faces challenges in adopting advanced optimization techniques due to the "black-box" nature of most AI-based solutions, which causes reluctance among company stakeholders. To overcome this issue, we employ an Interpretable Artificial Intelligence (IAI) approach that combines evolutionary computation with Reinforcement Learning (RL) to generate interpretable decision-making policies in the form of decision trees. This IAI solution is embedded within a simulation-based optimization framework specifically designed to handle the inherent uncertainties and stochastic behaviors of modern supply chains. To our knowledge, this marks the first attempt to combine IAI with simulation-based optimization for decision-making in SCM. The methodology is tested on two supply chain optimization problems, one fictional and one from the real world, and its performance is compared against widely used optimization and RL algorithms. The results reveal that the interpretable approach delivers competitive, and sometimes better, performance, challenging the prevailing notion that there must be a trade-off between interpretability and optimization efficiency. Additionally, the developed framework demonstrates strong potential for industrial applications, offering seamless integration with various Python-based algorithms.
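To make the core idea concrete, here is a minimal sketch, not the authors' actual framework, of what "evolutionary computation generating interpretable policies inside a stochastic simulation" can look like. The inventory dynamics, cost figures, and the single-threshold decision-stump policy below are all illustrative assumptions; the real method evolves full decision trees and uses far richer supply chain simulations.

```python
import random

random.seed(0)  # fixed seed so the stochastic simulation is reproducible

def simulate(policy, days=30, trials=20):
    """Average profit of a reorder policy under stochastic daily demand.
    The policy is a human-readable decision stump:
        if stock < threshold: order big_q  else: order small_q
    All prices/costs here are illustrative placeholders."""
    threshold, small_q, big_q = policy
    total = 0.0
    for _ in range(trials):
        stock, profit = 10, 0.0
        for _ in range(days):
            demand = random.randint(0, 10)   # uncertain demand each day
            sold = min(stock, demand)
            profit += 5.0 * sold             # revenue per unit sold
            stock -= sold
            order = big_q if stock < threshold else small_q
            profit -= 2.0 * order            # purchase cost
            profit -= 0.1 * stock            # holding cost on leftover stock
            stock += order
        total += profit
    return total / trials

def evolve(pop_size=20, generations=15):
    """Simple truncation-selection evolutionary loop: keep the best half,
    refill the population with mutated copies of the survivors."""
    pop = [(random.randint(0, 10), random.randint(0, 5), random.randint(5, 15))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=simulate, reverse=True)
        parents = pop[: pop_size // 2]
        children = [tuple(max(0, g + random.randint(-1, 1))
                          for g in random.choice(parents))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=simulate)

best = evolve()
print(best)  # a fully transparent rule a stakeholder can read and audit
```

The key point this toy example shares with the paper's framing: because fitness is measured by running the candidate policy through the stochastic simulation, the search optimizes directly for decision quality under uncertainty, yet the output stays a small, inspectable rule rather than a neural network.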