🤖 AI Summary
This paper addresses the limited interpretability and accuracy of standard multilayer perceptrons (MLPs) by proposing Kolmogorov–Arnold Networks (KANs), a neural architecture grounded in the Kolmogorov–Arnold representation theorem. Whereas MLPs apply fixed activation functions at nodes and learn linear weights on edges, KANs place learnable univariate functions, parameterized as B-splines, on **edges**; nodes simply sum their incoming signals and carry no weights or activations of their own. This design yields three key contributions: (1) theoretical analysis and empirical evaluation both indicate faster neural scaling laws than MLPs, with much smaller KANs matching or exceeding much larger MLPs on data fitting and partial differential equation (PDE) solving; (2) KAN parameters can be visualized directly and interpreted semantically, supporting human–machine collaborative scientific discovery; (3) KANs offer a general-purpose network design in which the learnable nonlinearities live on edges rather than on nodes.
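To make the edge-vs-node distinction concrete, here is a minimal toy sketch of a KAN layer. It is illustrative only: the class name and structure are not from the authors' released code, and Gaussian radial basis functions stand in for the paper's B-splines to keep the example short. The key point it demonstrates is that every edge carries its own learnable 1-D function, while nodes only sum.

```python
# Toy KAN layer sketch (assumed/illustrative, not the authors' implementation):
# each edge (i, j) has its own learnable univariate function phi_ij; output
# node j just sums phi_ij(x_i) over incoming edges i.
import torch
import torch.nn as nn


class ToyKANLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, num_basis: int = 8,
                 grid_range: tuple = (-2.0, 2.0)):
        super().__init__()
        # Fixed grid of basis-function centers shared by all edges.
        centers = torch.linspace(grid_range[0], grid_range[1], num_basis)
        self.register_buffer("centers", centers)
        self.width = (grid_range[1] - grid_range[0]) / (num_basis - 1)
        # One coefficient vector per edge: shape (out_dim, in_dim, num_basis).
        self.coeffs = nn.Parameter(0.1 * torch.randn(out_dim, in_dim, num_basis))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_dim) -> basis activations: (batch, in_dim, num_basis).
        basis = torch.exp(-((x.unsqueeze(-1) - self.centers) / self.width) ** 2)
        # Sum phi_ij(x_i) over incoming edges i for each output node j.
        return torch.einsum("bik,oik->bo", basis, self.coeffs)


# A deeper KAN is simply a stack of such layers; the nodes never apply
# extra weights or activation functions.
toy_kan = nn.Sequential(ToyKANLayer(2, 5), ToyKANLayer(5, 1))
y = toy_kan(torch.randn(16, 2))  # shape (16, 1)
```

In the paper's actual architecture the edge functions are B-splines (typically combined with a residual base activation) whose grids can be refined during training; the radial-basis parameterization above is only a compact stand-in for that idea.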
📝 Abstract
Inspired by the Kolmogorov-Arnold representation theorem, we propose Kolmogorov-Arnold Networks (KANs) as promising alternatives to Multi-Layer Perceptrons (MLPs). While MLPs have fixed activation functions on nodes ("neurons"), KANs have learnable activation functions on edges ("weights"). KANs have no linear weights at all -- every weight parameter is replaced by a univariate function parametrized as a spline. We show that this seemingly simple change makes KANs outperform MLPs in terms of accuracy and interpretability. For accuracy, much smaller KANs can achieve comparable or better accuracy than much larger MLPs in data fitting and PDE solving. Theoretically and empirically, KANs possess faster neural scaling laws than MLPs. For interpretability, KANs can be intuitively visualized and can easily interact with human users. Through two examples in mathematics and physics, KANs are shown to be useful collaborators helping scientists (re)discover mathematical and physical laws. In summary, KANs are promising alternatives for MLPs, opening opportunities for further improving today's deep learning models which rely heavily on MLPs.
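The abstract's phrase "every weight parameter is replaced by a univariate function parametrized as a spline" can be illustrated with a single edge activation phi(x) = sum_i c_i B_i(x). The snippet below is an assumed sketch, not the paper's code: it builds one cubic B-spline on a coarse grid, where in a real KAN the coefficients c_i would be the trainable parameters of that edge.

```python
# Illustration (assumed, not from the authors' code) of one edge activation
# phi(x) = sum_i c_i * B_i(x): a univariate spline whose coefficients c_i
# play the role that a single scalar weight plays in an MLP.
import numpy as np
from scipy.interpolate import BSpline

k = 3                                    # cubic spline order, a common default
grid = np.linspace(-1.0, 1.0, 6)         # coarse grid over the edge's input range
# Clamped knot vector: repeat the endpoints k extra times.
knots = np.concatenate(([grid[0]] * k, grid, [grid[-1]] * k))
coeffs = np.random.randn(len(knots) - k - 1)  # c_i; learned by gradient descent in a KAN

phi = BSpline(knots, coeffs, k)          # phi(x) = sum_i c_i * B_{i,k}(x)
x = np.linspace(-1.0, 1.0, 5)
print(phi(x))                            # the edge's nonlinearity evaluated at x
```

Because each such spline is a smooth, locally supported 1-D curve, the learned edge functions can be plotted and inspected directly, which is the basis of the interpretability and symbolic-discovery workflow described in the abstract.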