KAN: Kolmogorov-Arnold Networks

📅 2024-04-30
🏛️ arXiv.org
📈 Citations: 249
Influential: 66
🤖 AI Summary
This paper addresses the poor interpretability and limited accuracy of traditional multilayer perceptrons (MLPs) by proposing Kolmogorov–Arnold Networks (KANs), a novel neural architecture grounded in the Kolmogorov–Arnold representation theorem. Whereas MLPs combine linear weight layers with fixed activation functions on nodes, KANs place learnable B-spline functions on **edges**, enabling flexible nonlinear modeling; nodes perform only summation, with no weights or fixed activations of their own. This design yields three key contributions: (1) theoretical analysis and empirical evaluation both indicate faster neural scaling laws than MLPs, with small KANs matching or exceeding the accuracy of much larger MLPs in data fitting and partial differential equation (PDE) solving; (2) KANs admit direct parameter visualization and semantic interpretation, facilitating human-machine collaborative scientific discovery; (3) KANs constitute the first general-purpose neural network paradigm with learnable activations on every edge.
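
To make the edge parameterization concrete, below is a minimal NumPy sketch of a single KAN layer, assuming a fixed uniform grid on [-1, 1] and order-k B-splines. The names (`bspline_basis`, `KANLayer`) and the coefficient layout are illustrative choices, not the authors' reference implementation; in particular, the paper adds a SiLU residual term to each edge function, which this sketch omits.

```python
import numpy as np

def bspline_basis(x, grid, k=3):
    """Evaluate all order-k B-spline basis functions at the points x.

    x: (n,) inputs; grid: (g,) uniform knots. The knot vector is extended
    by k knots on each side, giving an (n, g - 1 + k) basis matrix via the
    Cox-de Boor recursion.
    """
    h = grid[1] - grid[0]
    t = np.concatenate([grid[0] - h * np.arange(k, 0, -1),
                        grid,
                        grid[-1] + h * np.arange(1, k + 1)])
    # Order 0: indicator function of each knot interval.
    B = ((x[:, None] >= t[None, :-1]) & (x[:, None] < t[None, 1:])).astype(float)
    for d in range(1, k + 1):  # raise the spline order one step at a time
        left = (x[:, None] - t[None, :-(d + 1)]) / (t[d:-1] - t[:-(d + 1)])
        right = (t[None, d + 1:] - x[:, None]) / (t[d + 1:] - t[1:-d])
        B = left * B[:, :-1] + right * B[:, 1:]
    return B

class KANLayer:
    """in_dim * out_dim learnable univariate edge functions; nodes only sum."""

    def __init__(self, in_dim, out_dim, grid_size=5, k=3, seed=0):
        rng = np.random.default_rng(seed)
        self.k = k
        self.grid = np.linspace(-1.0, 1.0, grid_size + 1)
        n_basis = grid_size + k  # order-k splines on grid_size intervals
        # coef[i, :, j] parameterizes the spline on edge i -> j.
        self.coef = rng.normal(0.0, 0.1, size=(in_dim, n_basis, out_dim))

    def __call__(self, x):  # x: (batch, in_dim) -> (batch, out_dim)
        out = np.zeros((x.shape[0], self.coef.shape[2]))
        for i in range(self.coef.shape[0]):
            B = bspline_basis(x[:, i], self.grid, self.k)
            out += B @ self.coef[i]  # node j receives sum_i phi_{i,j}(x_i)
        return out
```

Because the layer's output is linear in `coef`, each edge function can even be fit in closed form; the toy example after the Innovation list below uses this.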

📝 Abstract
Inspired by the Kolmogorov-Arnold representation theorem, we propose Kolmogorov-Arnold Networks (KANs) as promising alternatives to Multi-Layer Perceptrons (MLPs). While MLPs have fixed activation functions on nodes ("neurons"), KANs have learnable activation functions on edges ("weights"). KANs have no linear weights at all -- every weight parameter is replaced by a univariate function parametrized as a spline. We show that this seemingly simple change makes KANs outperform MLPs in terms of accuracy and interpretability. For accuracy, much smaller KANs can achieve comparable or better accuracy than much larger MLPs in data fitting and PDE solving. Theoretically and empirically, KANs possess faster neural scaling laws than MLPs. For interpretability, KANs can be intuitively visualized and can easily interact with human users. Through two examples in mathematics and physics, KANs are shown to be useful collaborators helping scientists (re)discover mathematical and physical laws. In summary, KANs are promising alternatives for MLPs, opening opportunities for further improving today's deep learning models which rely heavily on MLPs.
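
For reference, the theorem the abstract invokes states that any continuous function of n variables on a bounded domain can be written as a finite sum of compositions of continuous univariate functions; the standard form is below, and KANs relax its rigid two-layer, width-(2n+1) shape into layers of arbitrary width and depth.

```latex
% Kolmogorov-Arnold representation theorem: for continuous
% f : [0,1]^n -> R there exist continuous univariate functions
% \Phi_q and \phi_{q,p} with
f(x_1, \dots, x_n) = \sum_{q=1}^{2n+1} \Phi_q\!\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right)
```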
Problem

Research questions and friction points this paper is trying to address.

Proposing Kolmogorov-Arnold Networks as MLP alternatives.
Enhancing accuracy and interpretability in neural networks.
Facilitating discovery in mathematics and physics.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learnable activation functions placed on edges rather than nodes
Every linear weight replaced by a univariate B-spline function (a toy fit is sketched after this list)
Improved accuracy and interpretability relative to MLPs
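
As a toy illustration of the "weights become splines" idea: the layer sketched after the AI summary above is linear in its spline coefficients, so a single 1-to-1 edge function can be fit in closed form by least squares, with no backpropagation. This reuses the illustrative `KANLayer` and `bspline_basis` names from that sketch.

```python
import numpy as np

# Assumes the KANLayer / bspline_basis sketch shown after the AI summary.
layer = KANLayer(in_dim=1, out_dim=1, grid_size=5)
x = np.linspace(-1.0, 0.999, 200)              # stay inside the grid's support
y = np.sin(np.pi * x)                          # target univariate function

B = bspline_basis(x, layer.grid)               # (200, n_basis) design matrix
coef, *_ = np.linalg.lstsq(B, y, rcond=None)   # closed-form spline fit
layer.coef[0, :, 0] = coef

pred = layer(x[:, None])[:, 0]
# Error is on the order of 1e-2 at this grid resolution and shrinks
# rapidly as grid_size grows, echoing the scaling-law claim above.
print("max abs error:", np.abs(pred - y).max())
```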
🔎 Similar Papers
No similar papers found.
Ziming Liu
Massachusetts Institute of Technology, The NSF Institute for Artificial Intelligence and Fundamental Interactions
Yixuan Wang
California Institute of Technology
Sachin Vaidya
Massachusetts Institute of Technology, The NSF Institute for Artificial Intelligence and Fundamental Interactions
Fabian Ruehle
Northeastern University, The NSF Institute for Artificial Intelligence and Fundamental Interactions
James Halverson
Northeastern University, The NSF Institute for Artificial Intelligence and Fundamental Interactions
Marin Soljacic
Professor of Physics, MIT
nanophotonics, photonic crystals, nonlinear optics, wireless power transfer
Thomas Y. Hou
California Institute of Technology
numerical analysis, multiscale problems, nonlinear partial differential equations
Max Tegmark
Professor of Physics, MIT
Physics