An interpretable prototype parts-based neural network for medical tabular data

📅 2026-03-05
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses the challenge of limited clinical trust in machine learning models for medical tabular data due to poor interpretability. It introduces an intrinsically interpretable neural network by adapting the concept of prototype parts—originally developed in computer vision—to the domain of medical tabular data. The model discretizes inputs through trainable feature patches, learns prototype parts grounded in clinical semantics, and performs case-based comparisons between patient features and these prototypes in a latent space, yielding transparent predictions articulated in clinically meaningful language. Evaluated across multiple medical benchmark datasets, the approach achieves classification performance on par with state-of-the-art models while generating human-readable explanations that align with clinical reasoning.

📝 Abstract
The ability to interpret machine learning model decisions is critical in domains such as healthcare, where trust in model predictions is as important as their accuracy. Inspired by the development of prototype parts-based deep neural networks in computer vision, we propose a new model for tabular data, specifically tailored to medical records, which require discretization of diagnostic results against clinical norms. Unlike the original vision models, which rely on spatial structure, our method employs trainable patching over the features describing a patient to learn meaningful prototypical parts from structured data. These parts are represented as binary or discretized feature subsets, which allows the model to express prototypes in human-readable terms, enabling alignment with clinical language and case-based reasoning. The proposed neural network is inherently interpretable: it offers concept-based predictions by comparing the patient's description to learned prototypes in the latent space of the network. In experiments, we demonstrate that the model achieves classification performance competitive with widely used baselines on medical benchmark datasets while also offering transparency, bridging the gap between predictive performance and interpretability in clinical decision support.
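The pipeline the abstract describes (patch the tabular features, embed each patch, score similarity to learned prototype parts, and classify from those similarities) can be sketched as a forward pass. This is a minimal illustration, not the authors' implementation: all dimensions, the RBF-style similarity, and the random stand-in weights are assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): 12 tabular features,
# split into 4 patches of 3 features each, 6 prototypes, 2 classes.
N_FEATURES, PATCH_SIZE, N_PROTOS, N_CLASSES = 12, 3, 6, 2
N_PATCHES = N_FEATURES // PATCH_SIZE
LATENT = 8

# Random stand-ins for parameters that would be learned by training.
W_embed = rng.normal(size=(N_PATCHES, PATCH_SIZE, LATENT))  # per-patch encoder
prototypes = rng.normal(size=(N_PROTOS, LATENT))            # learned prototype parts
W_out = rng.normal(size=(N_PROTOS, N_CLASSES))              # similarity -> class logits

def forward(x):
    """Case-based forward pass: patch -> embed -> prototype similarity -> class."""
    patches = x.reshape(N_PATCHES, PATCH_SIZE)
    # One latent vector per feature patch.
    z = np.einsum('ps,psl->pl', patches, W_embed)
    # Squared distance of every patch embedding to every prototype.
    d2 = ((z[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    # A prototype's activation is its best similarity over all patches,
    # mirroring the max-pooling used in prototype-parts vision models.
    sim = np.exp(-d2).max(axis=0)                            # shape (N_PROTOS,)
    logits = sim @ W_out
    probs = np.exp(logits - logits.max())
    return sim, probs / probs.sum()

sim, probs = forward(rng.normal(size=N_FEATURES))
```

The `sim` vector is what makes such a model self-explanatory: each entry says how strongly one clinically grounded prototype part was matched by some part of the patient's record, and the final prediction is a transparent weighted vote over those matches.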
Problem

Research questions and friction points this paper is trying to address.

interpretability
medical tabular data
prototype-based models
clinical decision support
explainable AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

prototype learning
interpretable AI
tabular data
medical records
discretization
Jacek Karolczak
Poznan University of Technology, Institute of Computing Science, ul. Piotrowo 2, 60-695 Poznań, Poland
Jerzy Stefanowski
Poznan University of Technology
machine learning · data streams · Explainable AI · rule learning · imbalanced classification