KAN-SAs: Efficient Acceleration of Kolmogorov-Arnold Networks on Systolic Arrays

📅 2025-11-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Kolmogorov–Arnold Networks (KANs) offer high parameter efficiency and interpretability, but their recursively defined, learnable B-spline activation functions hinder efficient acceleration on systolic arrays (SAs), reducing hardware utilization to as low as 30%. Method: This work proposes KAN-SAs, the first architecture to enable efficient KAN acceleration on SAs. It introduces a non-recursive B-spline computation scheme and jointly exploits KANs' intrinsic sparsity to maximize SA resource utilization. Synthesized in 28 nm FD-SOI technology, KAN-SAs supports hybrid inference of KANs and conventional DNNs. Contribution/Results: Under area-equivalent constraints, KAN-SAs reduces KAN inference latency by up to 50% in clock cycles compared to a standard SA while achieving up to 100% SA utilization, making it the first dedicated, hardware-efficient accelerator for KAN inference.

📝 Abstract
Kolmogorov-Arnold Networks (KANs) have garnered significant attention for their promise of improved parameter efficiency and explainability compared to traditional Deep Neural Networks (DNNs). KANs' key innovation lies in the use of learnable non-linear activation functions, which are parametrized as splines. Splines are expressed as a linear combination of basis functions (B-splines). B-splines prove particularly challenging to accelerate due to their recursive definition. Systolic Array (SA)-based architectures have shown great promise as DNN accelerators thanks to their energy efficiency and low latency. However, their suitability and efficiency in accelerating KANs have never been assessed. Thus, in this work, we explore the use of SA architectures to accelerate KAN inference. We show that, while SAs can be used to accelerate part of the KAN inference, their utilization can drop to 30%. Hence, we propose KAN-SAs, a novel SA-based accelerator that leverages intrinsic properties of B-splines to enable efficient KAN inference. By including a non-recursive B-spline implementation and leveraging the intrinsic KAN sparsity, KAN-SAs extends conventional SAs to support efficient KAN inference in addition to conventional DNNs. KAN-SAs achieves up to 100% SA utilization and up to 50% clock-cycle reduction compared to conventional SAs of equivalent area, as shown by hardware synthesis results on a 28 nm FD-SOI technology. We also evaluate different configurations of the accelerator on various KAN applications, confirming the improved efficiency of KAN inference provided by KAN-SAs.
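The recursive definition the abstract refers to is the Cox-de Boor recursion: each degree-k basis function depends on two degree-(k-1) bases. A minimal Python sketch (the knot vector, degree, and evaluation point are illustrative example values, not from the paper) shows why this data dependence maps poorly onto a systolic array's regular multiply-accumulate dataflow:

```python
def bspline_basis(i, k, x, t):
    """Cox-de Boor recursion for the i-th B-spline basis function of
    degree k over knot vector t. Each degree-k evaluation branches into
    two degree-(k-1) evaluations, so the dependence pattern is a tree
    rather than a regular MAC pipeline."""
    if k == 0:
        # degree-0 basis: indicator function of the knot span [t_i, t_{i+1})
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = 0.0
    if t[i + k] != t[i]:
        left = (x - t[i]) / (t[i + k] - t[i]) * bspline_basis(i, k - 1, x, t)
    right = 0.0
    if t[i + k + 1] != t[i + 1]:
        right = (t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1]) * bspline_basis(i + 1, k - 1, x, t)
    return left + right

# Example: degree-2 bases on a uniform knot vector. Inside the interior
# spans the bases form a partition of unity (they sum to 1).
knots = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
total = sum(bspline_basis(i, 2, 2.5, knots) for i in range(4))
```

A spline-parametrized KAN activation is then a weighted sum of these bases, so the recursion sits on the critical path of every activation evaluation.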
Problem

Research questions and friction points this paper is trying to address.

Accelerating KAN inference efficiently on Systolic Arrays.
Overcoming the hardware-acceleration challenges posed by B-splines in KANs.
Improving SA utilization and reducing clock cycles for KAN workloads.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leveraging intrinsic B-spline properties for efficient KAN acceleration.
Introducing a non-recursive B-spline implementation to enhance systolic arrays.
Exploiting KAN sparsity to maximize systolic array utilization.
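The paper's exact non-recursive scheme is not reproduced here, but one standard way to remove the recursion is a bottom-up, fixed-depth evaluation: compute all degree-0 bases at once, then raise the degree in a loop of exactly k iterations. Because the loop depth is fixed, it unrolls into a feed-forward datapath that pipelines well in hardware. A hedged sketch (function name and example values are ours, not the authors'):

```python
import numpy as np

def bspline_bases_iterative(x, t, k):
    """Bottom-up (non-recursive) evaluation of all degree-k B-spline
    basis functions at x over knot vector t. The degree-raising loop
    runs exactly k times, so it unrolls into k feed-forward stages
    instead of a recursive call tree."""
    n = len(t) - k - 1  # number of degree-k basis functions
    # Stage 0: degree-0 bases, one indicator per knot span.
    B = np.array([1.0 if t[i] <= x < t[i + 1] else 0.0
                  for i in range(len(t) - 1)])
    for d in range(1, k + 1):  # raise the degree one stage at a time
        new = np.zeros(len(B) - 1)
        for i in range(len(new)):
            left = ((x - t[i]) / (t[i + d] - t[i]) * B[i]
                    if t[i + d] != t[i] else 0.0)
            right = ((t[i + d + 1] - x) / (t[i + d + 1] - t[i + 1]) * B[i + 1]
                     if t[i + d + 1] != t[i + 1] else 0.0)
            new[i] = left + right
        B = new
    return B[:n]

# Example: all degree-2 bases at x = 2.5 on a uniform knot vector.
knots = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
bases = bspline_bases_iterative(2.5, knots, 2)
```

Each stage consumes only the previous stage's outputs, which is the kind of regular, local dataflow systolic arrays are built around.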