AI Summary
This study addresses the limited interpretability of point forecasts and their associated uncertainties in hierarchical time-series demand forecasting within industrial settings, a gap that hinders effective supply chain decision-making. To bridge this gap, we propose an interpretable framework for large-scale hierarchical probabilistic forecasting that integrates feature attribution, uncertainty quantification, and counterfactual analysis. This unified approach simultaneously explains the prediction outcomes, identifies sources of uncertainty, and elucidates how changes in the training data influence model behavior. Evaluated on a semi-synthetic dataset from a chemical enterprise encompassing tens of thousands of products, our method demonstrates significantly improved explanation accuracy. Furthermore, multiple real-world case studies illustrate its practical utility in uncovering key demand drivers, thereby enhancing model trustworthiness and facilitating operational deployment.
Abstract
Hierarchical time-series forecasting is essential for demand prediction across many industries. While machine learning models have achieved strong accuracy and scalability on such forecasting tasks, the interpretability of their predictions in applied settings remains largely unexplored. To bridge this gap, we introduce a novel interpretability method for large-scale hierarchical probabilistic time-series forecasting, adapting generic interpretability techniques while addressing the challenges posed by hierarchical structures and uncertainty. Our approach offers interpretive insights for real-world industrial supply chain scenarios, including 1) the significance of individual time series within the hierarchy and of external variables at specific time points, 2) the impact of different variables on forecast uncertainty, and 3) explanations for forecast changes in response to modifications of the training dataset. To evaluate the method, we generate semi-synthetic datasets based on real-world scenarios of explaining hierarchical demand for over ten thousand products at a large chemical company. Experiments show that our method explains state-of-the-art industrial forecasting models with significantly higher explanation accuracy. Furthermore, multiple real-world case studies demonstrate its efficacy in identifying important patterns and explanations that help stakeholders better understand the forecasts. Our method also facilitates the identification of key drivers behind forecasted demand, enabling more informed decision-making and strategic planning, and it helps build trust and confidence among users, ultimately leading to better adoption of hierarchical forecasting models in practice.
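To make the first kind of insight concrete (attributing a forecast to external variables), the following is a minimal, hedged sketch, not the paper's actual method: it uses permutation importance on the pinball (quantile) loss, with a synthetic two-series hierarchy, a toy least-squares forecaster, and hypothetical drivers `price` and `promo` standing in for the external variables.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Toy hierarchy: two bottom-level product series whose sum is the total demand.
# "price" and "promo" are hypothetical external drivers (illustration only).
price = rng.normal(10.0, 1.0, n)
promo = rng.integers(0, 2, n).astype(float)
series_a = 5.0 - 0.4 * price + 2.0 * promo + rng.normal(0.0, 0.3, n)
series_b = 3.0 + 0.1 * price + rng.normal(0.0, 0.3, n)
total = series_a + series_b

X = np.column_stack([price, promo])
names = ["price", "promo"]

# Stand-in forecaster: ordinary least squares on the total-level series.
design = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(design, total, rcond=None)

def predict(features):
    return np.column_stack([np.ones(len(features)), features]) @ coef

def pinball_loss(y, y_hat, q=0.5):
    # Quantile (pinball) loss, a standard score for probabilistic forecasts.
    diff = y - y_hat
    return float(np.mean(np.maximum(q * diff, (q - 1.0) * diff)))

base_loss = pinball_loss(total, predict(X))

# Permutation importance: how much the loss increases when one driver
# is shuffled, breaking its association with the forecast target.
importance = {}
for j, name in enumerate(names):
    shuffled = X.copy()
    shuffled[:, j] = rng.permutation(shuffled[:, j])
    importance[name] = pinball_loss(total, predict(shuffled)) - base_loss

print(importance)
```

Because the loss being permuted is a quantile score rather than a point error, the same recipe extends naturally from attributing point forecasts to attributing forecast uncertainty (the second kind of insight), by scoring tail quantiles instead of the median.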