Explainable Diagnosis Prediction through Neuro-Symbolic Integration

📅 2024-10-01
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
The “black-box” nature of medical AI models undermines clinical trust and hinders real-world deployment. Method: This paper proposes an interpretable neuro-symbolic model that integrates medical prior knowledge with data-driven learning, applying Logical Neural Networks (LNNs) to diabetes prediction for the first time. The model embeds structured clinical rules via a learnable threshold mechanism and employs a multi-pathway, full-feature fusion architecture to jointly optimize accuracy and interpretability. Contribution/Results: Evaluated on a real-world diabetes prediction task, the model achieves 80.52% accuracy and an AUROC of 0.8457, outperforming logistic regression, SVM, and random forest baselines. Crucially, its learned weights and thresholds map directly to established clinical decision criteria, enabling symbol-level interpretability without sacrificing predictive performance. This work establishes a paradigm for co-optimizing accuracy and clinical interpretability in medical AI.

📝 Abstract
Diagnosis prediction is a critical task in healthcare, where timely and accurate identification of medical conditions can significantly impact patient outcomes. Traditional machine learning and deep learning models have achieved notable success in this domain but often lack interpretability, which is a crucial requirement in clinical settings. In this study, we explore the use of neuro-symbolic methods, specifically Logical Neural Networks (LNNs), to develop explainable models for diagnosis prediction. Essentially, we design and implement LNN-based models that integrate domain-specific knowledge through logical rules with learnable thresholds. Our models, particularly $M_{\text{multi-pathway}}$ and $M_{\text{comprehensive}}$, demonstrate superior performance over traditional models such as Logistic Regression, SVM, and Random Forest, achieving higher accuracy (up to 80.52%) and AUROC scores (up to 0.8457) in the case study of diabetes prediction. The learned weights and thresholds within the LNN models provide direct insights into feature contributions, enhancing interpretability without compromising predictive power. These findings highlight the potential of neuro-symbolic approaches in bridging the gap between accuracy and explainability in healthcare AI applications. By offering transparent and adaptable diagnostic models, our work contributes to the advancement of precision medicine and supports the development of equitable healthcare solutions. Future research will focus on extending these methods to larger and more diverse datasets to further validate their applicability across different medical conditions and populations.
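A minimal sketch of the mechanism the abstract describes: raw clinical features are converted to truth values through learnable thresholds, then combined with a weighted logical conjunction in the LNN style. The specific rule, feature values, thresholds, steepness, and weights below are illustrative assumptions for exposition, not the paper's learned parameters.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def soft_threshold(x, t, k=1.0):
    # Map a raw feature x to a truth value in [0, 1];
    # t is a (learnable) threshold, k controls the sharpness.
    return sigmoid(k * (x - t))

def weighted_and(truths, weights, bias=1.0):
    # Weighted Lukasiewicz-style conjunction: each input's "falsity"
    # (1 - truth) is penalized by its weight, then clamped to [0, 1].
    return max(0.0, min(1.0, bias - sum(w * (1.0 - v)
                                        for v, w in zip(truths, weights))))

# Illustrative rule: high glucose AND high BMI -> elevated diabetes risk.
# Thresholds echo familiar clinical cutoffs (126 mg/dL fasting glucose, BMI 30),
# but here they are placeholders for parameters a model would learn.
patient = {"glucose": 145.0, "bmi": 33.0}
thresholds = {"glucose": 126.0, "bmi": 30.0}
steepness = {"glucose": 0.1, "bmi": 0.5}
weights = [1.0, 1.0]

truths = [soft_threshold(patient[f], thresholds[f], steepness[f])
          for f in ("glucose", "bmi")]
risk = weighted_and(truths, weights)
print(f"truth values: {[round(v, 3) for v in truths]}, rule activation: {risk:.3f}")
```

Because each threshold and weight stays attached to a named feature and rule, the fitted parameters can be read back as clinical criteria, which is the interpretability property the paper emphasizes.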
Problem

Research questions and friction points this paper is trying to address.

Predictive Model
Interpretable AI
Healthcare Application
Innovation

Methods, ideas, or system contributions that make the work stand out.

Logical Neural Networks
Interpretable Machine Learning
Disease Prediction
Qiuhao Lu
McWilliams School of Biomedical Informatics, The University of Texas Health Science Center, Houston, TX, USA
Rui Li
McWilliams School of Biomedical Informatics, The University of Texas Health Science Center, Houston, TX, USA
Elham Sagheb
Department of Artificial Intelligence and Informatics, Mayo Clinic, Rochester, MN, USA
Andrew Wen
University of Texas Health Science Center at Houston, Houston, TX, USA; Rice University, Houston, TX, USA
Jinlian Wang
McWilliams School of Biomedical Informatics, The University of Texas Health Science Center, Houston, TX, USA
Liwei Wang
McWilliams School of Biomedical Informatics, The University of Texas Health Science Center, Houston, TX, USA
Jungwei W. Fan
Department of Artificial Intelligence and Informatics, Mayo Clinic, Rochester, MN, USA
Hongfang Liu
McWilliams School of Biomedical Informatics, The University of Texas Health Science Center, Houston, TX, USA