Optimized Architectures for Kolmogorov-Arnold Networks

📅 2025-12-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Kolmogorov–Arnold Networks (KANs) face an inherent trade-off between expressive power and mathematical interpretability. Method: This paper proposes an end-to-end differentiable neural architecture optimization framework that combines over-parameterized initialization with a differentiable sparsification strategy to drive structural search, enabling automatic learning of compact, interpretable, and high-accuracy KAN topologies during training. Crucially, the method preserves KANs' exact mathematical interpretability while overcoming their traditional expressivity limitations. Results: Experiments on function approximation, dynamical system forecasting, and real-world tasks demonstrate that the learned models achieve state-of-the-art or competitive accuracy with significantly reduced parameter counts. The authors position this as a principled route to simultaneously achieving strong expressivity and high interpretability in KANs.
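The core mechanism the summary describes, soft learnable gates on the edges of an overprovisioned KAN layer, penalized toward zero so that architecture search stays end-to-end differentiable, can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the per-edge univariate polynomials stand in for learned splines, and `kan_layer`, `sparsity_penalty`, and the sigmoid gate parameterization are assumed names and choices.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def kan_layer(x, coeffs, gate_logits):
    """Overprovisioned KAN-style layer (illustrative, not the paper's code).

    Each edge (i, j) carries a small univariate polynomial phi_ij -- a
    stand-in for the paper's learned splines -- scaled by a soft gate
    g_ij = sigmoid(gate_logits[i, j]).

    x: (n_in,), coeffs: (n_in, n_out, degree+1), gate_logits: (n_in, n_out).
    Returns the (n_out,) layer output sum_i g_ij * phi_ij(x_i).
    """
    n_in, n_out, _ = coeffs.shape
    gates = sigmoid(gate_logits)
    y = np.zeros(n_out)
    for i in range(n_in):
        # Evaluate every edge's univariate polynomial at x[i]:
        # powers = [1, x_i, x_i^2, ...], so coeffs[i] @ powers is phi_i*(x_i).
        powers = x[i] ** np.arange(coeffs.shape[2])
        y += gates[i] * (coeffs[i] @ powers)
    return y

def sparsity_penalty(gate_logits, lam=1e-2):
    # L1-style penalty on the soft gates: added to the task loss, it drives
    # unneeded edges toward zero while remaining differentiable, so the
    # topology is discovered by ordinary gradient descent.
    return lam * sigmoid(gate_logits).sum()
```

Because the gates are smooth functions of `gate_logits`, both the layer output and the penalty are differentiable in all parameters, which is what turns the structural search into a single optimization problem.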

📝 Abstract
Efforts to improve Kolmogorov-Arnold networks (KANs) with architectural enhancements have been stymied by the complexity those enhancements bring, undermining the interpretability that makes KANs attractive in the first place. Here we study overprovisioned architectures combined with sparsification to learn compact, interpretable KANs without sacrificing accuracy. Crucially, we focus on differentiable sparsification, turning architecture search into an end-to-end optimization problem. Across function approximation benchmarks, dynamical systems forecasting, and real-world prediction tasks, we demonstrate competitive or superior accuracy while discovering substantially smaller models. Overprovisioning and sparsification are synergistic, with the combination outperforming either alone. The result is a principled path toward models that are both more expressive and more interpretable, addressing a key tension in scientific machine learning.
Problem

Research questions and friction points this paper is trying to address.

Develop compact, interpretable Kolmogorov-Arnold networks via overprovisioning and sparsification
Address the trade-off between architectural complexity and interpretability in KANs
Enable end-to-end optimization for accurate yet simplified scientific machine learning models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Overprovisioned architectures combined with sparsification
Differentiable sparsification enabling end-to-end optimization
Synergistic approach for compact, interpretable models
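The last step implied by these bullets is reading the compact model out of the overprovisioned one: once training with a sparsity penalty on soft edge gates has converged, the gates are hardened by thresholding. A minimal sketch, where the sigmoid gate parameterization and the 0.5 threshold are illustrative assumptions rather than the paper's exact procedure:

```python
import numpy as np

def extract_topology(gate_logits, threshold=0.5):
    """Harden soft gates after training (illustrative sketch).

    Keeps only edges whose sigmoid gate exceeds `threshold`, yielding the
    compact sub-network discovered inside the overprovisioned one.
    Returns (boolean keep-mask, fraction of edges retained).
    """
    gates = 1.0 / (1.0 + np.exp(-gate_logits))
    mask = gates > threshold
    return mask, mask.sum() / mask.size
```

The retained fraction gives a direct measure of how much the learned model shrank relative to the overprovisioned starting architecture.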