🤖 AI Summary
This study addresses a limitation of traditional demand-forecasting evaluation: reliance on statistical metrics such as MAE and RMSE, which often fail to capture the real-world impact of forecast errors on inventory key performance indicators (KPIs), particularly total cost and service level, in intermittent-demand contexts such as automotive aftermarket spare parts. To bridge this gap, the authors propose a decision-centric simulation framework that integrates a synthetic demand generator, a plug-and-play forecasting module, and an inventory control simulator. This framework systematically establishes, for the first time, a mapping between forecast errors and inventory KPIs, enabling end-to-end evaluation of any forecasting model under realistic inventory policies. It reveals that improvements in statistical accuracy do not necessarily translate into better operational performance, and it provides a cost–service trade-off basis for model selection.
📝 Abstract
Efficient management of spare parts inventory is crucial in the automotive aftermarket, where demand is highly intermittent and uncertainty drives substantial cost and service risks. Forecasting is therefore central, but the quality of forecasting models should be judged not by statistical accuracy (e.g., MAE, RMSE) but rather by their impact on key operational performance indicators (KPIs), such as total cost and service level. Yet most existing work evaluates models exclusively using accuracy metrics, and the relationship between these metrics and KPIs remains poorly understood. To address this gap, we propose a decision-centric simulation software framework that enables systematic evaluation of forecasting models in a realistic inventory-management setting. The framework comprises: (i) a synthetic demand generator tailored to spare-parts demand characteristics, (ii) a flexible forecasting module that can host arbitrary predictive models, and (iii) an inventory control simulator that consumes the forecasts and computes operational KPIs. This closed-loop setup enables researchers to evaluate models not only in terms of statistical error but also in terms of downstream inventory implications. Using a wide range of simulation scenarios, we show that improvements in accuracy metrics do not necessarily lead to better KPIs, and that models with similar error profiles can induce different cost–service trade-offs. We analyze these discrepancies to characterize how forecast performance affects inventory outcomes and derive guidance for model selection. Overall, the framework links demand forecasting and inventory management, shifting evaluation from predictive accuracy toward operational relevance in the automotive aftermarket and related domains. An open-source implementation of the software is available at https://github.com/caisr-hh/TruckParts-Demand-Inventory-Simulator/releases/tag/IDA_2026.
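To make the closed-loop idea concrete, the sketch below wires the three components the abstract names (demand generator → forecasting module → inventory simulator) into a toy pipeline. All function names, the zero-inflated demand model, the moving-average forecaster, and the base-stock policy are illustrative assumptions for this sketch, not the actual API or policies of the released framework; the point is only how a forecast feeds an inventory policy whose KPIs (total cost, service level) are then measured.

```python
import random

def generate_intermittent_demand(n_periods, p_nonzero=0.3, mean_size=4, seed=0):
    """Toy zero-inflated demand: most periods are zero, occasional positive spikes
    (a stand-in for the framework's synthetic spare-parts demand generator)."""
    rng = random.Random(seed)
    return [rng.randint(1, 2 * mean_size - 1) if rng.random() < p_nonzero else 0
            for _ in range(n_periods)]

def moving_average_forecast(history, window=8):
    """One-step-ahead forecast: mean of the last `window` observations.
    Any predictive model with this signature could be plugged in instead."""
    recent = history[-window:] if history else [0]
    return sum(recent) / len(recent)

def simulate_inventory(demand, forecast_fn, lead_time_cover=3,
                       holding_cost=1.0, stockout_cost=10.0):
    """Simple base-stock policy: each period, raise stock to
    forecast * lead_time_cover, then serve demand and accrue costs."""
    on_hand, total_cost, filled, total_demand = 10.0, 0.0, 0, 0
    history = []
    for d in demand:
        target = forecast_fn(history) * lead_time_cover
        if on_hand < target:          # instant replenishment up to target
            on_hand = target
        served = min(d, on_hand)      # unmet demand is lost, penalized below
        on_hand -= served
        filled += served
        total_demand += d
        total_cost += holding_cost * on_hand + stockout_cost * (d - served)
        history.append(d)
    service_level = filled / total_demand if total_demand else 1.0
    return total_cost, service_level

demand = generate_intermittent_demand(200)
cost, sl = simulate_inventory(demand, moving_average_forecast)
print(f"total cost = {cost:.1f}, service level = {sl:.2%}")
```

Running this loop with two forecasters of similar MAE but different bias illustrates the paper's central point: the KPIs (cost, service level) can diverge even when the error metrics look alike, which is exactly what the full framework measures systematically.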