EVolutionary Independent DEtermiNistiC Explanation

📅 2025-01-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Black-box deep neural networks suffer from poor interpretability, while existing eXplainable AI (XAI) methods exhibit instability and gradient dependence. Method: This paper proposes EVIDENCE—a deterministic, model-agnostic interpretability framework integrating evolutionary optimization, information-theoretic constraints, and spectral signal modeling to robustly identify input-critical features. It enables cross-modal interpretable feature filtering for audio domains including respiratory sounds, speech, and music. Results: On COVID-19 diagnosis, EVIDENCE achieves a 32% accuracy gain and 16% AUC improvement; for Parkinson’s disease classification, it attains an F1-score of 0.997; and on GTZAN music classification, it reaches an AUC of 0.996—consistently outperforming LIME, SHAP, and Grad-CAM. Notably, this work pioneers the integration of evolutionary principles with rigorous mathematical formalization in XAI, overcoming fundamental limitations of uncertainty and model coupling inherent in prior approaches.

Technology Category

Application Category

📝 Abstract
The widespread use of artificial intelligence deep neural networks in fields such as medicine and engineering necessitates understanding their decision-making processes. Current explainability methods often produce inconsistent results and struggle to highlight essential signals influencing model inferences. This paper introduces the Evolutionary Independent Deterministic Explanation (EVIDENCE) theory, a novel approach offering a deterministic, model-independent method for extracting significant signals from black-box models. EVIDENCE theory, grounded in robust mathematical formalization, is validated through empirical tests on diverse datasets, including COVID-19 audio diagnostics, Parkinson's disease voice recordings, and the George Tzanetakis music classification dataset (GTZAN). Practical applications of EVIDENCE include improving diagnostic accuracy in healthcare and enhancing audio signal analysis. For instance, in the COVID-19 use case, EVIDENCE-filtered spectrograms fed into a frozen Residual Network with 50 layers improved precision by 32% for positive cases and increased the area under the curve (AUC) by 16% compared to baseline models. For Parkinson's disease classification, EVIDENCE achieved near-perfect precision and sensitivity, with a macro average F1-Score of 0.997. In the GTZAN, EVIDENCE maintained a high AUC of 0.996, demonstrating its efficacy in filtering relevant features for accurate genre classification. EVIDENCE outperformed other Explainable Artificial Intelligence (XAI) methods such as LIME, SHAP, and GradCAM in almost all metrics. These findings indicate that EVIDENCE not only improves classification accuracy but also provides a transparent and reproducible explanation mechanism, crucial for advancing the trustworthiness and applicability of AI systems in real-world settings.
Problem

Research questions and friction points this paper is trying to address.

AI Decision Transparency
Critical Information Identification
Trust in AI Applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

EVIDENCE
Interpretable AI
Performance Enhancement
🔎 Similar Papers
No similar papers found.