Interpretable Clinical Classification with Kolmogorov-Arnold Networks

📅 2025-09-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Clinical AI systems face limited adoption due to their intrinsic lack of interpretability, undermining physician trust. To address this, we propose two inherently interpretable neural models—Logistic-KAN and Kolmogorov-Arnold Additive Model (KAAM)—grounded in the Kolmogorov-Arnold representation theorem. These models employ functionally explicit, symbolic network architectures that yield fully transparent predictions: no post-hoc explanation tools are required, as outputs consist directly of human-readable mathematical formulas and patient-level similar-case retrieval. Evaluated on multiple public healthcare datasets, both models achieve accuracy competitive with or superior to state-of-the-art black-box methods—including XGBoost and deep neural networks—while ensuring end-to-end auditability of decision pathways. This work establishes a novel paradigm for clinically trustworthy AI, uniquely reconciling high predictive performance with strong, built-in interpretability.

📝 Abstract
Why should a clinician trust an Artificial Intelligence (AI) prediction? Despite the increasing accuracy of machine learning methods in medicine, the lack of transparency continues to hinder their adoption in clinical practice. In this work, we explore Kolmogorov-Arnold Networks (KANs) for clinical classification tasks on tabular data. Unlike traditional neural networks, KANs are function-based architectures that offer intrinsic interpretability through transparent, symbolic representations. We introduce Logistic-KAN, a flexible generalization of logistic regression, and Kolmogorov-Arnold Additive Model (KAAM), a simplified additive variant that delivers transparent, symbolic formulas. Unlike black-box models that require post-hoc explainability tools, our models support built-in patient-level insights, intuitive visualizations, and nearest-patient retrieval. Across multiple health datasets, our models match or outperform standard baselines, while remaining fully interpretable. These results position KANs as a promising step toward trustworthy AI that clinicians can understand, audit, and act upon.
Problem

Research questions and friction points this paper is trying to address.

Addressing lack of transparency in AI predictions for clinical practice
Providing interpretable classification models instead of black-box approaches
Enabling clinician trust through transparent, symbolic representations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Kolmogorov-Arnold Networks for clinical classification tasks
Logistic-KAN as flexible generalization of logistic regression
Kolmogorov-Arnold Additive Model delivers transparent symbolic formulas
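To make the additive-model idea concrete, below is a minimal, hypothetical sketch of a KAAM-style classifier. The paper's models are built from KAN layers with learned spline activations; here a fixed Gaussian radial-basis expansion stands in for the splines, so this is an illustration of the structure, not the authors' implementation. The class name `AdditiveKAMSketch` and all hyperparameters are assumptions. The key property survives the simplification: the logit is a sum of per-feature univariate functions f_j(x_j), so each f_j can be plotted or read off as a formula, which is where the built-in interpretability comes from.

```python
import numpy as np

class AdditiveKAMSketch:
    """Hypothetical KAAM-style additive classifier (not the paper's code).

    Each feature x_j passes through its own univariate function f_j,
    represented here as a weighted Gaussian radial-basis expansion
    (a stand-in for the paper's spline-based KAN activations).
    The logit is sum_j f_j(x_j) + b, trained with plain gradient
    descent on the logistic loss.
    """

    def __init__(self, n_features, n_basis=8, lr=0.1, epochs=500, seed=0):
        rng = np.random.default_rng(seed)
        self.centers = np.linspace(-2, 2, n_basis)          # shared RBF centers
        self.W = rng.normal(0, 0.1, (n_features, n_basis))  # per-feature weights
        self.b = 0.0
        self.lr, self.epochs = lr, epochs

    def _basis(self, X):
        # Gaussian bumps evaluated per feature: shape (n_samples, n_features, n_basis)
        return np.exp(-(X[:, :, None] - self.centers) ** 2)

    def decision_function(self, X):
        # Additive logit: sum over features (and basis terms) of f_j(x_j)
        return (self._basis(X) * self.W).sum(axis=(1, 2)) + self.b

    def fit(self, X, y):
        for _ in range(self.epochs):
            p = 1.0 / (1.0 + np.exp(-self.decision_function(X)))  # sigmoid
            err = p - y                                           # dLoss/dLogit
            grad_W = (self._basis(X) * err[:, None, None]).mean(axis=0)
            self.W -= self.lr * grad_W
            self.b -= self.lr * err.mean()
        return self

    def predict(self, X):
        return (self.decision_function(X) > 0).astype(int)


# Toy check on a separable rule: class 1 iff x0 + x1 > 0
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = AdditiveKAMSketch(n_features=2).fit(X, y)
acc = (model.predict(X) == y).mean()
```

Because the basis functions are fixed, training reduces to a convex logistic fit over the expanded features, and each learned f_j can be inspected on its own, which is the same auditability argument the paper makes for KAAM.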
Authors

Alejandro Almodóvar
Information Processing and Telecommunications Center, ETS Ingenieros de Telecomunicación, Universidad Politécnica de Madrid, Av. Complutense, 30, 28040, Madrid, Spain

Patricia A. Apellániz
Information Processing and Telecommunications Center, ETS Ingenieros de Telecomunicación, Universidad Politécnica de Madrid, Av. Complutense, 30, 28040, Madrid, Spain

Alba Garrido
Information Processing and Telecommunications Center, ETS Ingenieros de Telecomunicación, Universidad Politécnica de Madrid, Av. Complutense, 30, 28040, Madrid, Spain

Fernando Fernández-Salvador
Information Processing and Telecommunications Center, ETS Ingenieros de Telecomunicación, Universidad Politécnica de Madrid, Av. Complutense, 30, 28040, Madrid, Spain

Santiago Zazo
Professor, Universidad Politécnica de Madrid (Communications)

Juan Parras
Information Processing and Telecommunications Center, ETS Ingenieros de Telecomunicación, Universidad Politécnica de Madrid, Av. Complutense, 30, 28040, Madrid, Spain