Conformalized-KANs: Uncertainty Quantification with Coverage Guarantees for Kolmogorov-Arnold Networks (KANs) in Scientific Machine Learning

📅 2025-04-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the lack of statistically rigorous uncertainty quantification in Kolmogorov–Arnold Networks (KANs) for scientific machine learning. We propose Conformalized-KANs, the first framework integrating distribution-free split-conformal prediction with KAN ensembles to produce calibrated prediction intervals with guaranteed coverage (e.g., 90%) under minimal assumptions. The framework is compatible with emerging KAN variants, including FBKANs and MFKANs, substantially broadening their applicability in high-reliability scientific modeling. Experiments demonstrate tight, well-calibrated intervals with empirical coverage matching the nominal level, robustness to hyperparameter choices, and support for trustworthy deployment of KANs in physics-informed simulation and multifidelity modeling. The core contribution is the first provably valid, model-agnostic uncertainty quantification paradigm for KANs, requiring no assumptions about the error distribution and no parametric modeling of uncertainty.

📝 Abstract
This paper explores uncertainty quantification (UQ) methods in the context of Kolmogorov-Arnold Networks (KANs). We apply an ensemble approach to KANs to obtain a heuristic measure of UQ, enhancing interpretability and robustness in modeling complex functions. Building on this, we introduce Conformalized-KANs, which integrate conformal prediction, a distribution-free UQ technique, with KAN ensembles to generate calibrated prediction intervals with guaranteed coverage. Extensive numerical experiments are conducted to evaluate the effectiveness of these methods, focusing particularly on the robustness and accuracy of the prediction intervals under various hyperparameter settings. We show that the conformal KAN predictions can be applied to recent extensions of KANs, including Finite Basis KANs (FBKANs) and multifidelity KANs (MFKANs). The results demonstrate the potential of our approaches to improve the reliability and applicability of KANs in scientific machine learning.
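The split-conformal procedure behind these guarantees is simple to sketch. The snippet below is an illustrative implementation of generic split-conformal prediction, not the paper's code: `predict` stands in for any point predictor (such as the mean of a trained KAN ensemble), and the toy sinusoid data and function names are assumptions made for the example.

```python
import numpy as np

def split_conformal_intervals(predict, X_cal, y_cal, X_test, alpha=0.1):
    """Distribution-free split-conformal prediction intervals.

    `predict` is any fixed point predictor (e.g., a KAN-ensemble mean)
    evaluated on a held-out calibration set.
    """
    # Nonconformity scores: absolute residuals on the calibration set.
    scores = np.abs(y_cal - predict(X_cal))
    n = len(scores)
    # Finite-sample-corrected quantile level guarantees >= 1 - alpha coverage.
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, q_level, method="higher")
    y_hat = predict(X_test)
    return y_hat - q, y_hat + q

# Toy usage: a stand-in predictor on noisy sinusoidal data.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=500)
y = np.sin(np.pi * X) + 0.1 * rng.normal(size=500)
X_cal, y_cal, X_test, y_test = X[:250], y[:250], X[250:], y[250:]
predict = lambda x: np.sin(np.pi * x)  # pretend this is the ensemble mean
lo, hi = split_conformal_intervals(predict, X_cal, y_cal, X_test, alpha=0.1)
coverage = np.mean((y_test >= lo) & (y_test <= hi))
```

With `alpha=0.1`, the intervals cover the true response on roughly 90% of test points, with no assumption on the noise distribution beyond exchangeability of calibration and test data.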
Problem

Research questions and friction points this paper is trying to address.

Quantify uncertainty in Kolmogorov-Arnold Networks (KANs)
Enhance interpretability and robustness in complex function modeling
Generate calibrated prediction intervals with guaranteed coverage
Innovation

Methods, ideas, or system contributions that make the work stand out.

Ensemble approach yields a heuristic UQ measure and enhances KAN interpretability
Conformal prediction ensures calibrated intervals
Applicable to FBKANs and MFKANs extensions