AI Summary
To address the lack of interpretability in industrial process anomaly detection, this paper proposes the first adaptation of the ExIFFI interpretability framework to industrial settings, unifying anomaly identification with root-cause attribution. The approach builds on the Extended Isolation Forest (EIF) by integrating quantitative feature importance estimation and instance-level counterfactual analysis, yielding fast, efficient, and locally interpretable attributions for anomaly decisions. Evaluated on two public industrial datasets, it significantly outperforms existing interpretable anomaly detection models in both explanation fidelity and computational efficiency, while maintaining high detection accuracy and real-time capability. Its core contributions are: (i) the first successful adaptation of ExIFFI to industrial anomaly detection; (ii) overcoming the limitations of black-box decision-making; and (iii) fulfilling Industry 5.0's demand for trustworthy, human-understandable AI through transparent, actionable explanations.
Abstract
Anomaly detection (AD) is a crucial process often required in industrial settings. Anomalies can signal underlying issues within a system, prompting further investigation. Industrial processes aim to streamline operations as much as possible, up to the production of the final product, making AD an essential means to reach this goal. Conventional anomaly detection methodologies typically classify observations as either normal or anomalous without providing insight into the reasons behind these classifications. Consequently, in light of the emergence of Industry 5.0, a more desirable approach involves providing interpretable outcomes, enabling users to understand the rationale behind the results. This paper presents the first industrial application of ExIFFI, a recently developed approach focused on the production of fast and efficient explanations for the Extended Isolation Forest (EIF) anomaly detection method. ExIFFI is tested on two publicly available industrial datasets, demonstrating superior effectiveness in explanations and computational efficiency with respect to other state-of-the-art explainable AD models.
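The abstract describes a two-step pipeline: detect anomalies with an isolation-forest-style model, then attribute each detection to the features responsible. As a hedged illustration only, the sketch below uses scikit-learn's standard IsolationForest (not the Extended Isolation Forest) and a simple perturbation-based local attribution; this is an illustrative stand-in, not ExIFFI's actual split-based feature importance.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic "sensor" data: 500 readings over 2 features,
# with one anomalous spike injected on the second feature.
X = rng.normal(0.0, 1.0, size=(500, 2))
X[0] = [0.1, 8.0]  # anomalous reading on sensor 2

# Step 1: detect anomalies (lower decision_function = more anomalous).
model = IsolationForest(n_estimators=100, random_state=0).fit(X)
scores = model.decision_function(X)
anomaly_idx = int(np.argmin(scores))

def feature_attribution(model, X, i):
    """Crude local attribution: how much does replacing each feature of
    sample i with the dataset median raise its anomaly score?
    (Illustrative stand-in for ExIFFI's importance computation.)"""
    base = model.decision_function(X[i:i + 1])[0]
    deltas = []
    for j in range(X.shape[1]):
        x_mod = X[i].copy()
        x_mod[j] = np.median(X[:, j])
        deltas.append(model.decision_function(x_mod.reshape(1, -1))[0] - base)
    return np.array(deltas)

# Step 2: explain the detection by per-feature attribution.
attr = feature_attribution(model, X, anomaly_idx)
print("anomalous sample:", anomaly_idx)
print("feature driving the anomaly:", int(attr.argmax()))
```

Replacing the spiked feature with its median normalizes the sample and raises its score sharply, so that feature receives the largest attribution; ExIFFI instead derives importances directly from the forest's splits, which is what makes it fast enough for real-time industrial use.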