Explainable Artificial Intelligence based Soft Evaluation Indicator for Arc Fault Diagnosis

📅 2025-07-21
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing AI-based arc-fault diagnosis models achieve high accuracy but suffer from poor interpretability, undermining the credibility of their outputs. To address this, we first formulate formal interpretability criteria for arc-fault detection and propose a novel soft evaluation metric grounded in eXplainable AI (XAI). We further design a lightweight balanced neural network that jointly optimizes classification accuracy, model interpretability, and feature-extraction capability. Our method is validated on real-world arc-fault data spanning multiple noise levels and sampling rates, demonstrating significant improvements in model transparency and decision trustworthiness across two heterogeneous experimental scenarios, while maintaining state-of-the-art classification accuracy. The core contributions are threefold: (1) the first quantitative, domain-specific interpretability standard for arc-fault diagnosis; (2) a new XAI-aware evaluation framework that harmonizes accuracy and interpretability; and (3) an efficient neural architecture enabling synergistic optimization of both objectives.

๐Ÿ“ Abstract
Novel AI-based arc fault diagnosis models have demonstrated outstanding performance in terms of classification accuracy. However, an inherent problem is whether these models can actually be trusted to find arc faults. In this light, this work proposes a soft evaluation indicator that explains the outputs of arc fault diagnosis models, by defining the correct explanation of arc faults and leveraging Explainable Artificial Intelligence and real arc fault experiments. Meanwhile, a lightweight balanced neural network is proposed to guarantee competitive accuracy and a high soft feature extraction score. In our experiments, several traditional machine learning and deep learning methods are evaluated across two arc fault datasets with different sampling times and noise levels to test the effectiveness of the soft evaluation indicator. Through this approach, arc fault diagnosis models become easier to understand and trust, allowing practitioners to make informed and trustworthy decisions.
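The abstract describes scoring a model by how well its explanations align with the physically correct evidence for arc faults, then blending that score with accuracy. The paper does not publish its formulas here, so the sketch below is purely illustrative: it assumes we already have per-feature attributions (e.g., from a saliency or SHAP-style explainer) and a boolean mask marking the features a domain expert considers genuine arc-fault signatures. The function names, the attribution-mass ratio, and the `alpha` trade-off weight are all hypothetical, not the authors' actual indicator.

```python
import numpy as np

def soft_explanation_score(attributions, relevant_mask):
    """Hypothetical soft score: fraction of absolute attribution
    mass that falls on domain-relevant features."""
    a = np.abs(np.asarray(attributions, dtype=float))
    total = a.sum()
    if total == 0.0:
        return 0.0
    return float(a[np.asarray(relevant_mask, dtype=bool)].sum() / total)

def combined_indicator(accuracy, soft_score, alpha=0.5):
    """Blend accuracy with explanation quality; alpha is an
    assumed trade-off weight, not taken from the paper."""
    return alpha * accuracy + (1.0 - alpha) * soft_score

# Toy example: 6 features, of which features 1 and 3 are the
# (assumed) physically meaningful arc-fault signatures.
attr = [0.05, 0.40, 0.05, 0.40, 0.05, 0.05]
mask = [False, True, False, True, False, False]
s = soft_explanation_score(attr, mask)   # 0.8 of the mass is relevant
print(combined_indicator(0.95, s))       # 0.875 with alpha = 0.5
```

Under this formulation, a model that is accurate but attributes its decisions to irrelevant features would score well on accuracy yet poorly on the combined indicator, which is the trust gap the paper targets.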
Problem

Research questions and friction points this paper is trying to address.

Develops explainable AI indicator for arc fault diagnosis trustworthiness
Proposes lightweight neural network for accurate soft feature extraction
Tests evaluation indicator on diverse datasets and noise levels
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explainable AI for arc fault diagnosis
Lightweight balanced neural network
Soft evaluation indicator for trust
Qianchao Wang
Southeast University/ The Hong Kong Polytechnic University
deep learning · energy system · physics-informed networks · interpretability
Yuxuan Ding
Qualcomm AI Research
Vision-and-Language · Large Language Model · Efficient AI
Chuanzhen Jia
Department of Building Environment and Energy Engineering, Hong Kong Polytechnic University, Hung Hom, Hong Kong
Zhe Li
Shenzhen Power Supply Bureau Co., Ltd, Guangzhou, China
Yaping Du
Department of Building Environment and Energy Engineering, Hong Kong Polytechnic University, Hung Hom, Hong Kong