๐ค AI Summary
Existing AI-based arc-fault diagnosis models achieve high accuracy but suffer from poor interpretability, undermining output credibility. To address this, we first formulate formal interpretability criteria for arc-fault detection and propose a novel soft evaluation metric grounded in eXplainable AI (XAI). We further design a lightweight balanced neural network that jointly optimizes classification accuracy, model interpretability, and feature extraction capability. Our method is rigorously validated on a real-world arc-fault dataset featuring multiple noise levels and sampling rates, demonstrating significant improvements in model transparency and decision trustworthiness across two heterogeneous experimental scenariosโwhile maintaining state-of-the-art classification accuracy. The core contributions are threefold: (1) the first quantitative, domain-specific interpretability standard for arc-fault diagnosis; (2) a new XAI-aware evaluation framework that harmonizes accuracy and interpretability; and (3) an efficient neural architecture enabling synergistic optimization of both objectives.
๐ Abstract
Novel AI-based arc fault diagnosis models have demonstrated outstanding performance in terms of classification accuracy. However, an inherent problem is whether these models can actually be trusted to find arc faults. In this light, this work proposes a soft evaluation indicator that explains the outputs of arc fault diagnosis models, by defining the the correct explanation of arc faults and leveraging Explainable Artificial Intelligence and real arc fault experiments. Meanwhile, a lightweight balanced neural network is proposed to guarantee competitive accuracy and soft feature extraction score. In our experiments, several traditional machine learning methods and deep learning methods across two arc fault datasets with different sample times and noise levels are utilized to test the effectiveness of the soft evaluation indicator. Through this approach, the arc fault diagnosis models are easy to understand and trust, allowing practitioners to make informed and trustworthy decisions.