🤖 AI Summary
This work proposes DualFlexKAN, a novel architecture that overcomes key limitations of both traditional multilayer perceptrons (MLPs) and existing Kolmogorov–Arnold networks (KANs). While MLPs are constrained by fixed activation functions, current KANs suffer from quadratic parameter growth, structural rigidity, and difficulties in regularization. DualFlexKAN introduces a decoupled control mechanism for input transformation and output activation through a two-stage nonlinear modeling framework, achieving a favorable trade-off between expressive power and computational efficiency. The design supports diverse basis functions, including orthogonal polynomials, B-splines, and radial basis functions, and enables flexible regularization strategies, substantially mitigating parameter explosion. Experiments demonstrate that DualFlexKAN outperforms both MLPs and standard KANs in regression, physics-informed modeling, and function approximation tasks, achieving superior accuracy, faster convergence, and improved gradient fidelity with 1–2 orders of magnitude fewer parameters.
📝 Abstract
Multi-Layer Perceptrons (MLPs) rely on pre-defined, fixed activation functions, imposing a static inductive bias that forces the network to approximate complex topologies solely through increased depth and width. Kolmogorov–Arnold Networks (KANs) address this limitation through edge-centric learnable functions, yet their formulation suffers from quadratic parameter scaling and architectural rigidity that hinders the effective integration of standard regularization techniques. This paper introduces the DualFlexKAN (DFKAN), a flexible architecture featuring a dual-stage mechanism that independently controls pre-linear input transformations and post-linear output activations. This decoupling enables hybrid networks that optimize the trade-off between expressiveness and computational cost. Unlike standard formulations, DFKAN supports diverse basis function families, including orthogonal polynomials, B-splines, and radial basis functions, integrated with configurable regularization strategies that stabilize training dynamics. Comprehensive evaluations across regression benchmarks, physics-informed tasks, and function approximation demonstrate that DFKAN outperforms both MLPs and conventional KANs in accuracy, convergence speed, and gradient fidelity. The proposed hybrid configurations achieve superior performance with one to two orders of magnitude fewer parameters than standard KANs, effectively mitigating the parameter explosion problem while preserving KAN-style expressiveness. DFKAN provides a principled, scalable framework for incorporating adaptive non-linearities, proving particularly advantageous for data-efficient learning and interpretable function discovery in scientific applications.
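To make the dual-stage idea concrete, the sketch below shows one plausible reading of such a layer: a pre-linear basis expansion of the inputs (here Chebyshev polynomials, one of the orthogonal families the abstract mentions), a shared linear map, and a post-linear output activation. All names (`chebyshev_basis`, `DualFlexLayer`) and design details are illustrative assumptions, not the paper's implementation; note how a single weight matrix over the expanded features avoids the per-edge spline tables that drive quadratic parameter growth in standard KANs.

```python
import numpy as np

def chebyshev_basis(x, degree):
    """Stage 1 (assumed): expand each feature in Chebyshev polynomials T_0..T_degree.

    x: (batch, d_in) array, assumed pre-scaled to [-1, 1].
    Returns an array of shape (batch, d_in * (degree + 1)).
    """
    x = np.clip(x, -1.0, 1.0)
    T = [np.ones_like(x), x]
    for _ in range(2, degree + 1):
        T.append(2 * x * T[-1] - T[-2])  # recurrence: T_n = 2x*T_{n-1} - T_{n-2}
    return np.concatenate(T[: degree + 1], axis=-1)

class DualFlexLayer:
    """Hypothetical dual-stage layer: basis expansion -> linear map -> output activation."""

    def __init__(self, d_in, d_out, degree=3, seed=None):
        rng = np.random.default_rng(seed)
        self.degree = degree
        # One shared weight matrix over expanded features:
        # d_in * (degree + 1) * d_out parameters, rather than a
        # learnable function per edge as in a standard KAN layer.
        self.W = rng.normal(0.0, 0.1, (d_in * (degree + 1), d_out))
        self.b = np.zeros(d_out)

    def forward(self, x):
        z = chebyshev_basis(x, self.degree) @ self.W + self.b  # pre-linear + linear
        return np.tanh(z)  # stage 2 (assumed): post-linear output activation

layer = DualFlexLayer(d_in=2, d_out=4, degree=3, seed=0)
y = layer.forward(np.random.default_rng(0).uniform(-1, 1, (8, 2)))
print(y.shape)  # (8, 4)
```

Because the two stages are decoupled, swapping `chebyshev_basis` for a B-spline or radial-basis expansion, or `np.tanh` for another learnable activation, changes one function rather than the whole layer, which is the kind of hybrid configurability the abstract describes.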