An explainable hybrid deep learning-enabled intelligent fault detection and diagnosis approach for automotive software systems validation

📅 2026-02-01
🏛️ Knowledge-Based Systems
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the limitations of traditional black-box fault diagnosis models, which lack interpretability and thus fail to meet the stringent trustworthiness and efficiency demands of real-time, safety-critical automotive software systems. To overcome this, the authors propose a hybrid deep learning architecture that integrates a one-dimensional convolutional neural network (1D CNN) with gated recurrent units (GRU). For the first time, multiple explainable AI (XAI) techniques, including Integrated Gradients, DeepLIFT, Gradient SHAP, and DeepLIFT SHAP, are systematically incorporated into this framework to enable high-accuracy fault detection, identification, and localization. Experimental results on real-time data recorded during a virtual test drive on a hardware-in-the-loop system demonstrate that the proposed approach not only significantly enhances diagnostic transparency but also effectively supports root cause analysis and model self-adaptation, offering a practical and interpretable solution for real-time fault diagnosis in safety-critical systems.
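The paper does not publish its model code; purely as an illustration of the hybrid architecture described above, the following is a minimal NumPy sketch of a 1D CNN feature extractor feeding a GRU, with a linear head over fault classes. All shapes, weights, and the four-class output are invented for the example.

```python
import numpy as np

def conv1d(x, kernels, stride=1):
    """Valid 1D convolution with ReLU: x (T, C_in), kernels (C_out, K, C_in)."""
    C_out, K, _ = kernels.shape
    T_out = (x.shape[0] - K) // stride + 1
    out = np.zeros((T_out, C_out))
    for t in range(T_out):
        window = x[t * stride : t * stride + K]           # (K, C_in)
        out[t] = np.tensordot(kernels, window, axes=([1, 2], [0, 1]))
    return np.maximum(out, 0.0)

def gru_step(h, x, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU cell update (biases omitted for brevity)."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(x @ Wz + h @ Uz)                          # update gate
    r = sigmoid(x @ Wr + h @ Ur)                          # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)              # candidate state
    return (1.0 - z) * h + z * h_tilde

def hybrid_forward(signal, kernels, gru_params, W_out):
    """1D CNN feature extraction, then a GRU over the feature sequence."""
    feats = conv1d(signal, kernels)                       # (T', C_out)
    h = np.zeros(gru_params[0].shape[1])
    for x_t in feats:
        h = gru_step(h, x_t, *gru_params)
    return h @ W_out                                      # per-class logits

rng = np.random.default_rng(0)
signal = rng.standard_normal((64, 3))                     # 64 samples, 3 channels
kernels = rng.standard_normal((8, 5, 3)) * 0.1            # 8 filters of width 5
H = 16
gru_params = tuple(rng.standard_normal(s) * 0.1           # Wz,Uz,Wr,Ur,Wh,Uh
                   for s in [(8, H), (H, H)] * 3)
W_out = rng.standard_normal((H, 4)) * 0.1                 # 4 hypothetical fault classes
logits = hybrid_forward(signal, kernels, gru_params, W_out)
print(logits.shape)                                       # (4,)
```

The CNN stage captures local waveform patterns in the recorded signals, while the recurrent stage models their temporal evolution, which is the motivation the paper gives for combining the two.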

📝 Abstract
Advancements in data-driven machine learning have emerged as a pivotal element in supporting automotive software systems (ASSs) engineering across various levels of the V-development process. During system verification and validation, the integration of an intelligent fault detection and diagnosis (FDD) model with the test-recording analysis process serves as a powerful tool for efficiently ensuring functional safety. However, the lack of interpretability of the developed black-box FDD models not only hinders understanding of the cause underlying a prediction, but also prevents the model from being adapted based on the prediction result. This, in turn, increases the computational cost required for developing a complex FDD model and limits confidence in real-time safety-critical applications. To address this challenge, a novel explainable method for fault detection, identification, and localization is proposed in this article, with the aim of providing a clear understanding of the logic behind the prediction outcome. To this end, a hybrid 1dCNN-GRU-based intelligent model was developed to analyze recordings from the real-time validation process of ASSs. The employment of explainable AI techniques, i.e., Integrated Gradients (IGs), DeepLIFT, Gradient SHAP, and DeepLIFT SHAP, was instrumental in enabling model adaptation and facilitating root cause analysis (RCA). The proposed approach is applied to a real-time dataset collected during a virtual test drive performed by the user on a hardware-in-the-loop system.
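Of the XAI techniques the abstract lists, Integrated Gradients has the simplest closed form: attribute feature i by (x_i − x'_i) times the average gradient of the model output along the straight-line path from a baseline x' to the input x. As an illustration only (the toy network, weights, and baseline below are invented, not the paper's model), a midpoint Riemann-sum sketch:

```python
import numpy as np

def f(x, V, w):
    """Toy differentiable 'fault score': one-hidden-layer tanh network."""
    return float(w @ np.tanh(V @ x))

def grad_f(x, V, w):
    """Analytic gradient of f with respect to the input x."""
    return V.T @ (w * (1.0 - np.tanh(V @ x) ** 2))

def integrated_gradients(x, baseline, V, w, steps=256):
    """Midpoint Riemann-sum approximation of IG along the straight path."""
    alphas = (np.arange(steps) + 0.5) / steps
    avg_grad = np.mean(
        [grad_f(baseline + a * (x - baseline), V, w) for a in alphas], axis=0)
    return (x - baseline) * avg_grad

rng = np.random.default_rng(1)
V = rng.standard_normal((6, 4))
w = rng.standard_normal(6)
x = rng.standard_normal(4)        # a suspect input frame (illustrative)
baseline = np.zeros(4)            # all-zero reference signal
attr = integrated_gradients(x, baseline, V, w)
# Completeness axiom: attributions sum to f(x) - f(baseline)
gap = abs(attr.sum() - (f(x, V, w) - f(baseline, V, w)))
```

The completeness check is the practical appeal for root cause analysis: the model's score change is fully distributed over input features, so the largest attributions point at the signals most responsible for a flagged fault.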
Problem

Research questions and friction points this paper is trying to address.

fault detection and diagnosis
explainability
automotive software systems
black-box models
functional safety
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explainable AI
Hybrid Deep Learning
Fault Detection and Diagnosis
1D CNN-GRU
Root Cause Analysis