🤖 AI Summary
This work addresses the challenge of limited clinical trust in machine learning models for medical tabular data due to poor interpretability. It introduces an intrinsically interpretable neural network by adapting the concept of prototype parts, originally developed in computer vision, to the domain of medical tabular data. The model discretizes inputs through trainable feature patches, learns prototype parts grounded in clinical semantics, and performs case-based comparisons between patient features and these prototypes in a latent space, yielding transparent predictions expressed in clinically meaningful terms. Evaluated across multiple medical benchmark datasets, the approach achieves classification performance on par with state-of-the-art models while generating human-readable explanations that align with clinical reasoning.
📝 Abstract
The ability to interpret machine learning model decisions is critical in domains such as healthcare, where trust in model predictions is as important as their accuracy. Inspired by the development of prototype parts-based deep neural networks in computer vision, we propose a new model for tabular data, specifically tailored to medical records, which require discretization of diagnostic results with respect to clinical norms. Unlike the original vision models, which rely on spatial structure, our method employs trainable patching over the features describing a patient to learn meaningful prototypical parts from structured data. These parts are represented as binary or discretized feature subsets, allowing the model to express prototypes in human-readable terms and enabling alignment with clinical language and case-based reasoning. The proposed neural network is inherently interpretable, offering concept-based predictions by comparing the patient's description to learned prototypes in the network's latent space. In experiments, we demonstrate that the model achieves classification performance competitive with widely used baselines on medical benchmark datasets while also offering transparency, bridging the gap between predictive performance and interpretability in clinical decision support.
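The mechanism described above, embedding feature patches and classifying from prototype similarities, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the fixed patch grouping, the random "learned" parameters, and the ProtoPNet-style log-distance activation are all illustrative assumptions standing in for the trainable components.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 8 tabular features split into 4 "patches" of 2 features each
# (a hypothetical stand-in for the paper's trainable patching).
n_features, patch_size = 8, 2
n_patches = n_features // patch_size
latent_dim, n_prototypes, n_classes = 4, 3, 2

# Hypothetical learned parameters: per-patch embeddings, prototype vectors,
# and a linear head mapping prototype similarities to class logits.
W_embed = rng.normal(size=(n_patches, patch_size, latent_dim))
prototypes = rng.normal(size=(n_prototypes, latent_dim))
W_head = rng.normal(size=(n_prototypes, n_classes))

def predict(x):
    """Prototype-parts-style forward pass on one patient feature vector x."""
    patches = x.reshape(n_patches, patch_size)      # group features into patches
    z = np.einsum('pf,pfd->pd', patches, W_embed)   # embed each patch in latent space
    # Squared distance of every patch embedding to every prototype.
    d2 = ((z[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    d_min = d2.min(axis=0)                          # closest patch per prototype
    sim = np.log((d_min + 1) / (d_min + 1e-4))      # distance -> similarity activation
    return sim @ W_head                             # class logits from prototype evidence

logits = predict(rng.normal(size=n_features))
print(logits.shape)  # one logit per class
```

Because each logit is a weighted sum of per-prototype similarities, the model's evidence can be read off directly: for any prediction, one can report which prototype a patient's patches most resembled and how strongly it contributed, which is what supports the human-readable, case-based explanations claimed above.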