🤖 AI Summary
This work addresses the absence of a hardware-agnostic methodology for evaluating the inference complexity of Kolmogorov–Arnold Networks (KANs), which hinders early-stage architectural design and cross-platform comparisons. To bridge this gap, we introduce the first unified and analytically tractable complexity-metric framework based on platform-independent operations—namely real multiplications (RM), bit operations (BOP), and the number of additions and bit-shifts (NABS)—applicable across diverse KAN variants, including B-spline, GRBF, Chebyshev, and Fourier formulations. Our approach enables direct computation of computational complexity solely from the network architecture, bypassing hardware-specific synthesis pipelines. This facilitates rapid and fair efficiency comparisons between KANs and conventional neural networks, offering principled guidance for model selection in latency- and power-constrained applications.
📝 Abstract
Kolmogorov–Arnold Networks (KANs) have recently emerged as a powerful architecture for various machine learning applications. However, their unique structure raises significant concerns about computational overhead. Existing studies primarily evaluate KAN complexity in terms of the Floating-Point Operations (FLOPs) required for GPU-based training and inference. Yet in many latency-sensitive and power-constrained deployment scenarios, such as neural-network-driven nonlinearity mitigation in optical communications or channel state estimation in wireless communications, training is performed offline and dedicated hardware accelerators are preferred over GPUs for inference. Recent hardware implementation studies report KAN complexity using platform-specific resource consumption metrics, such as Look-Up Tables, Flip-Flops, and Block RAMs, but these metrics require a full hardware design and synthesis stage, which limits their utility for early-stage architectural decisions and cross-platform comparisons. To address this, we derive generalized, platform-independent formulae for evaluating the hardware inference complexity of KANs in terms of Real Multiplications (RM), Bit Operations (BOP), and the Number of Additions and Bit-Shifts (NABS). We extend our analysis across multiple KAN variants, including B-spline, Gaussian Radial Basis Function (GRBF), Chebyshev, and Fourier KANs. The proposed metrics can be computed directly from the network structure and enable a fair and straightforward comparison of inference complexity between KANs and other neural network architectures.
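To illustrate what "computed directly from the network structure" means in practice, the sketch below tallies real multiplications (RM) for a multi-layer KAN from its layer widths alone. The per-edge cost model in `rm_per_edge` and the specific multiplier counts for each variant are hypothetical placeholders for illustration only, not the formulae derived in this work.

```python
# Illustrative sketch: estimate real multiplications (RM) for one KAN
# forward pass from the architecture alone, with no hardware synthesis.
# The per-edge cost models below are ASSUMED placeholders, not the
# paper's derived formulae.

def rm_per_edge(variant: str, n_basis: int) -> int:
    """Assumed RM cost of evaluating one learnable edge activation
    expanded over n_basis basis functions (placeholder model)."""
    cost = {
        "bspline":   2 * n_basis,  # assumed: basis evaluation + weighted sum
        "grbf":      3 * n_basis,  # assumed: exponent argument + scale + sum
        "chebyshev": 2 * n_basis,  # assumed: recurrence step + weighted sum
        "fourier":   4 * n_basis,  # assumed: sin/cos arguments + weighted sum
    }
    return cost[variant]

def kan_inference_rm(layer_widths, variant="bspline", n_basis=8):
    """Total RM per forward pass: each input-output pair in every layer
    carries its own univariate activation with n_basis terms."""
    total = 0
    for n_in, n_out in zip(layer_widths, layer_widths[1:]):
        total += n_in * n_out * rm_per_edge(variant, n_basis)
    return total

# Example: a [4, 16, 1] B-spline KAN with 8 basis functions per edge
# has (4*16 + 16*1) = 80 edges, each costing 2*8 = 16 multiplications.
print(kan_inference_rm([4, 16, 1], "bspline", 8))  # → 1280
```

An analogous tally for BOP or NABS would only change the per-edge cost model, which is what makes such metrics cheap to evaluate during early-stage architecture search.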