Finite basis Kolmogorov-Arnold networks: domain decomposition for data-driven and physics-informed problems

📅 2024-06-28
🏛️ arXiv.org
📈 Citations: 31
Influential: 1
🤖 AI Summary
Kolmogorov–Arnold networks (KANs) can be expensive to train and struggle with multiscale problems. Method: The authors propose finite-basis KANs (FBKANs), a domain decomposition approach inspired by finite-basis physics-informed neural networks (FBPINNs): the global domain is partitioned into overlapping subdomains, several small local KANs are trained in parallel, and their outputs are combined through localized basis (window) functions into a single global solution, with physics-informed losses used where governing equations are available. Contribution/Results: Experiments show that FBKANs outperform single KANs and MLPs on noisy function approximation and physics-informed PDE solving, yielding higher accuracy, improved robustness to noise, and better scalability to multiscale problems.
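The decomposition described above follows the FBPINN recipe: overlapping subdomains carry smooth window functions that form a partition of unity, so independently trained local networks blend into one global solution. A minimal NumPy sketch of that blending step, with placeholder callables standing in for trained local KANs; the cosine window shape and subdomain layout here are illustrative assumptions, not the paper's exact choices:

```python
import numpy as np

def window(x, a, b):
    """Smooth cosine bump supported on (a, b), zero elsewhere."""
    w = np.zeros_like(x)
    inside = (x > a) & (x < b)
    t = (x[inside] - a) / (b - a)               # map subdomain to (0, 1)
    w[inside] = 0.5 * (1.0 + np.cos(np.pi * (2.0 * t - 1.0)))
    return w

def combine(x, subdomains, local_models):
    """Partition-of-unity blend: u(x) = sum_i [w_i(x) / sum_j w_j(x)] * u_i(x)."""
    ws = np.stack([window(x, a, b) for a, b in subdomains])
    ws = ws / ws.sum(axis=0, keepdims=True)     # normalize so windows sum to 1
    return sum(w * m(x) for w, m in zip(ws, local_models))

# Three overlapping subdomains covering [0, 1]
subdomains = [(-0.1, 0.45), (0.3, 0.7), (0.55, 1.1)]
local_models = [np.sin, np.sin, np.sin]         # placeholders for trained local KANs
x = np.linspace(0.0, 1.0, 101)
u = combine(x, subdomains, local_models)
```

Because the normalized windows sum to one at every point, identical local models reproduce the target exactly; in training, each local model only needs to fit its own subdomain, which is what allows the small KANs to be optimized in parallel.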

📝 Abstract
Kolmogorov-Arnold networks (KANs) have attracted attention recently as an alternative to multilayer perceptrons (MLPs) for scientific machine learning. However, KANs can be expensive to train, even for relatively small networks. Inspired by finite basis physics-informed neural networks (FBPINNs), in this work, we develop a domain decomposition method for KANs that allows for several small KANs to be trained in parallel to give accurate solutions for multiscale problems. We show that finite basis KANs (FBKANs) can provide accurate results with noisy data and for physics-informed training.
Problem

Research questions and friction points this paper is trying to address.

Develop domain decomposition for Kolmogorov-Arnold networks
Enable parallel training of small KANs for multiscale problems
Provide accurate solutions with noisy data and physics-informed training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Domain decomposition method for Kolmogorov-Arnold networks
Parallel training of multiple small KANs
Accurate solutions for multiscale problems
👥 Authors
Amanda A. Howard
Pacific Northwest National Laboratory, Richland, WA 99354 USA
Bruno Jacob
Pacific Northwest National Laboratory
Computational Physics, Scientific Machine Learning
Sarah H. Murphy
University of North Carolina at Charlotte, Charlotte, NC USA; Pacific Northwest National Laboratory, Richland, WA 99354 USA
Alexander Heinlein
Delft University of Technology (TU Delft)
Numerical analysis, domain decomposition methods, high-performance computing, scientific machine learning
P. Stinis
Pacific Northwest National Laboratory, Richland, WA 99354 USA; University of Washington, Applied Mathematics, Seattle, WA USA; Brown University, Applied Mathematics, Providence, RI 02912 USA