🤖 AI Summary
Kolmogorov–Arnold networks (KANs) can be expensive to train, even at modest size, and struggle with multiscale problems. Method: Inspired by finite-basis physics-informed neural networks (FBPINNs), finite-basis KANs (FBKANs) bring domain decomposition to KANs: the global domain is partitioned into overlapping subdomains, a small KAN is trained on each subdomain in parallel, and the local outputs are combined through overlapping basis (window) functions into a single global solution. Contribution/Results: FBKANs provide accurate solutions for multiscale problems, remain accurate when trained on noisy data, and support physics-informed training.
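The core idea above can be sketched in a few lines: overlapping window functions normalized into a partition of unity, with small local models fit independently on each subdomain and summed with window weights. This is a toy 1D illustration, not the paper's implementation; the cosine windows, subdomain layout, and the degree-9 polynomial "local models" standing in for small KANs are all assumptions made for the demo.

```python
import numpy as np

def window(x, center, width):
    """Smooth, compactly supported cosine bump over |x - center| < width
    (an illustrative choice of basis function)."""
    z = (x - center) / width
    return np.where(np.abs(z) < 1.0, np.cos(0.5 * np.pi * z) ** 2, 0.0)

centers = np.linspace(0.0, 1.0, 9)   # overlapping 1D subdomain centers
width = 0.2                          # wide enough that windows cover [0, 1]
x = np.linspace(0.0, 1.0, 401)

W = np.stack([window(x, c, width) for c in centers])
pou = W / W.sum(axis=0)              # normalize -> partition of unity

# A multiscale toy target: slow component plus a faster oscillation.
target = np.sin(2 * np.pi * x) + 0.1 * np.sin(8 * np.pi * x)

# Local models (stand-ins for small KANs): low-degree polynomials,
# each fit only on its window's support, then window-weighted and summed.
u_fbk = np.zeros_like(x)
for w_j, c in zip(pou, centers):
    mask = w_j > 0
    z = (x[mask] - c) / width        # rescale to [-1, 1] for conditioning
    coeffs = np.polyfit(z, target[mask], deg=9)
    u_fbk += w_j * np.polyval(coeffs, (x - c) / width)

# Baseline: one global polynomial of the same degree over the whole domain.
u_one = np.polyval(np.polyfit(2 * x - 1, target, deg=9), 2 * x - 1)

err_local = np.max(np.abs(u_fbk - target))
err_global = np.max(np.abs(u_one - target))
print(err_local, err_global)  # the decomposed fit resolves the fast scale better
```

Because the windows sum to one everywhere, the combined output is a smooth blend of the local models in the overlap regions, which is what lets many small, cheap local models reproduce a multiscale function that a single model of the same capacity cannot.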
📝 Abstract
Kolmogorov-Arnold networks (KANs) have attracted attention recently as an alternative to multilayer perceptrons (MLPs) for scientific machine learning. However, KANs can be expensive to train, even for relatively small networks. Inspired by finite basis physics-informed neural networks (FBPINNs), in this work, we develop a domain decomposition method for KANs that allows for several small KANs to be trained in parallel to give accurate solutions for multiscale problems. We show that finite basis KANs (FBKANs) can provide accurate results with noisy data and for physics-informed training.