FEKAN: Feature-Enriched Kolmogorov-Arnold Networks

📅 2026-02-18
🤖 AI Summary
This work proposes Feature-Enriched Kolmogorov–Arnold Networks (FEKAN) to address the limitations of existing Kolmogorov–Arnold Networks (KANs), which suffer from high computational cost and slow convergence, hindering their scalability. FEKAN enhances model expressivity and accelerates convergence through a feature-enrichment mechanism that introduces no additional trainable parameters, while preserving the interpretability inherent to KANs. Grounded in the Kolmogorov–Arnold representation theorem, FEKAN is well suited to tasks such as function approximation, physics-informed neural networks, and neural operators. Experimental results demonstrate that FEKAN consistently outperforms current KAN variants across multiple benchmarks, achieving both higher accuracy and faster convergence.

📝 Abstract
Kolmogorov-Arnold Networks (KANs) have recently emerged as a compelling alternative to multilayer perceptrons, offering enhanced interpretability via functional decomposition. However, existing KAN architectures, including spline-, wavelet-, and radial-basis-function variants, suffer from high computational cost and slow convergence, limiting their scalability and practical applicability. Here, we introduce Feature-Enriched Kolmogorov-Arnold Networks (FEKAN), a simple yet effective extension that preserves all the advantages of KANs while improving computational efficiency and predictive accuracy through feature enrichment, without increasing the number of trainable parameters. By incorporating these additional features, FEKAN accelerates convergence, increases representation capacity, and substantially mitigates the computational overhead characteristic of state-of-the-art KAN architectures. We investigate FEKAN across a comprehensive set of benchmarks, including function-approximation tasks, physics-informed formulations for diverse partial differential equations (PDEs), and neural operator settings that map between input and output function spaces. For function approximation, we systematically compare FEKAN against a broad family of KAN variants: FastKAN, WavKAN, ReLUKAN, HRKAN, ChebyshevKAN, RBFKAN, and the original SplineKAN. Across all tasks, FEKAN demonstrates substantially faster convergence and consistently higher approximation accuracy than the underlying baseline architectures. We also establish the theoretical foundations for FEKAN, showing its superior representation capacity compared to KAN, which contributes to improved accuracy and efficiency.
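The abstract does not specify the exact enrichment functions, so the following is only a minimal sketch of the general idea: a fixed, parameter-free feature map is applied to the input before a KAN-style layer whose learnable edge functions are combinations of fixed basis functions. The `enrich` map, the sine/polynomial feature choice, and the RBF layer below are illustrative assumptions, not the paper's architecture; in the paper the network is presumably sized so that the total trainable parameter count matches the baseline.

```python
import numpy as np

def enrich(x):
    # Hypothetical fixed, parameter-free feature map (the paper's exact
    # enrichment functions are not given in this summary).
    return np.concatenate([x, np.sin(np.pi * x), x ** 2], axis=-1)

class RBFLayer:
    """Minimal RBF-KAN-style layer: each input-output edge applies a
    learnable combination of fixed Gaussian basis functions."""
    def __init__(self, d_in, d_out, n_basis=8, seed=0):
        rng = rng = np.random.default_rng(seed)
        self.centers = np.linspace(-1.0, 1.0, n_basis)   # fixed grid
        self.gamma = float(n_basis)                      # fixed width
        # Trainable coefficients: one set of basis weights per edge.
        self.coef = rng.normal(0.0, 0.1, (d_in, n_basis, d_out))

    def __call__(self, x):
        # phi has shape (batch, d_in, n_basis): Gaussian activations.
        phi = np.exp(-self.gamma * (x[..., None] - self.centers) ** 2)
        return np.einsum("bik,iko->bo", phi, self.coef)

# A FEKAN-style forward pass simply prepends the fixed enrichment;
# the enrichment itself contributes no trainable parameters.
x = np.random.default_rng(1).uniform(-1.0, 1.0, (4, 2))
layer = RBFLayer(d_in=6, d_out=3)   # 6 = 2 raw dims x 3 enrichment maps
y = layer(enrich(x))
print(y.shape)                      # (4, 3)
```

Because the enrichment is fixed, its only cost is a wider input to the first layer; whether and how the authors rebalance basis size or depth to hold the parameter count constant is not detailed in the abstract.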
Problem

Research questions and friction points this paper is trying to address.

Kolmogorov-Arnold Networks
computational cost
slow convergence
scalability
practical applicability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Feature Enrichment
Kolmogorov-Arnold Networks
Computational Efficiency
Representation Capacity
Convergence Acceleration
Sidharth S. Menon
Aerospace Engineering Department, Worcester Polytechnic Institute, Worcester, MA 01609, USA
Ameya D. Jagtap
Assistant Professor, WPI | Brown University | TIFR-CAM | IISc
AI4Science · Scientific Machine Learning · Scientific Computation · Foundation Models