🤖 AI Summary
Existing XAI visualizations for high-stakes medical diagnosis require users to perform further interpretation themselves, widening the comprehension gap and undermining trust. This paper addresses cardiac auscultation diagnosis with DiagramNet, an ante-hoc interpretable model that combines a domain-relevant ontology and schematic representation with abductive reasoning to generate clinically relevant murmur-diagram explanations. DiagramNet predicts cardiac diagnoses, selects the best-fitting hypothesis through criteria evaluation, and explains its predictions with diagrams aligned with clinical knowledge, narrowing the interpretability gap between model outputs and expert understanding. Modeling studies show that DiagramNet provides faithful murmur shape explanations while outperforming baseline models in diagnostic accuracy. A qualitative user study with medical students found that clinically relevant, diagrammatic explanations are trusted and preferred over technical saliency map explanations. This work advances trustworthy, domain-aligned XAI by bridging symbolic domain knowledge with data-driven inference in a human-centered framework.
📝 Abstract
Many visualizations have been developed for explainable AI (XAI), but they often require further reasoning by users to interpret them. Investigating XAI for high-stakes medical diagnosis, we propose improving domain alignment with diagrammatic and abductive reasoning to reduce the interpretability gap. We developed DiagramNet to predict cardiac diagnoses from heart auscultation, select the best-fitting hypothesis based on criteria evaluation, and explain with clinically-relevant murmur diagrams. The ante-hoc interpretable model leverages a domain-relevant ontology, representation, and reasoning process to increase trust in expert users. In modeling studies, we found that DiagramNet not only provides faithful murmur shape explanations, but also has better performance than baseline models. We demonstrate the interpretability and trustworthiness of diagrammatic, abductive explanations in a qualitative user study with medical students, showing that clinically-relevant, diagrammatic explanations are preferred over technical saliency map explanations. This work contributes insights into providing domain-aligned explanations for user-centric XAI in complex domains.