Diagrammatization and Abduction to Improve AI Interpretability With Domain-Aligned Explanations for Medical Diagnosis

📅 2023-02-02
📈 Citations: 3
Influential: 1
🤖 AI Summary
Existing XAI tools for high-stakes medical diagnosis rely on users' secondary interpretation, widening the comprehension gap and undermining trust. Focusing on cardiac auscultation, this paper proposes DiagramNet, an ante-hoc interpretable model that integrates domain-ontology-driven diagrammatic representation with abductive reasoning to generate clinically intelligible murmur waveform explanations. DiagramNet generates diagnostic hypotheses, selects the best-fitting one through criteria evaluation, and produces visualizations explicitly aligned with clinical knowledge, narrowing the interpretability gap between model outputs and expert understanding. Modeling studies show that DiagramNet provides faithful murmur shape explanations while outperforming baseline models in diagnostic accuracy. A qualitative user study with medical students found that clinically relevant, diagrammatic explanations were preferred over, and trusted more than, technical saliency map explanations. This work advances trustworthy, clinically grounded XAI by bridging symbolic domain knowledge with data-driven inference in a human-centered framework.
📝 Abstract
Many visualizations have been developed for explainable AI (XAI), but they often require further reasoning by users to interpret. Investigating XAI for high-stakes medical diagnosis, we propose improving domain alignment with diagrammatic and abductive reasoning to reduce the interpretability gap. We developed DiagramNet to predict cardiac diagnoses from heart auscultation, select the best-fitting hypothesis based on criteria evaluation, and explain with clinically-relevant murmur diagrams. The ante-hoc interpretable model leverages domain-relevant ontology, representation, and reasoning process to increase trust in expert users. In modeling studies, we found that DiagramNet not only provides faithful murmur shape explanations, but also has better performance than baseline models. We demonstrate the interpretability and trustworthiness of diagrammatic, abductive explanations in a qualitative user study with medical students, showing that clinically-relevant, diagrammatic explanations are preferred over technical saliency map explanations. This work contributes insights into providing domain-aligned explanations for user-centric XAI in complex domains.
Problem

Research questions and friction points this paper is trying to address.

Enhance AI interpretability in medical diagnosis
Develop DiagramNet for cardiac diagnosis prediction
Improve trust with domain-aligned diagrammatic explanations
Innovation

Methods, ideas, or system contributions that make the work stand out.

DiagramNet for cardiac diagnosis
Abductive reasoning in XAI
Clinically-relevant diagrammatic explanations
Brian Y. Lim
Associate Professor, Department of Computer Science, National University of Singapore
Explainable AI · Human-Centered AI · Human-Computer Interaction · Ubiquitous Computing · Machine
Joseph P. Cahaly
Massachusetts Institute of Technology, USA
Chester Y. F. Sng
National University of Singapore, Singapore
Adam Chew
National University of Singapore, Singapore