KANELÉ: Kolmogorov-Arnold Networks for Efficient LUT-based Evaluation

📅 2025-12-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the low-latency and low-power requirements of neural network inference on FPGAs, this paper presents the first systematic FPGA implementation framework for Kolmogorov-Arnold Networks (KANs). Leveraging the discretization-friendly property of KANs—namely, their learnable 1D B-spline activations defined over fixed domains—the authors design a custom LUT-mapping mechanism and jointly optimize training, quantization, and pruning, moving beyond the conventional MLP-to-LUT paradigm. The approach achieves high accuracy while significantly improving computational density and resource efficiency: it accelerates inference by up to 2700× over prior KAN-FPGA implementations and reduces logic resource usage by orders of magnitude. On symbolic regression and physics-informed formula modeling tasks, it matches or surpasses state-of-the-art LUT-based architectures. The framework has also been deployed in real-time, low-power control systems.

📝 Abstract
Low-latency, resource-efficient neural network inference on FPGAs is essential for applications demanding real-time capability and low power. Lookup table (LUT)-based neural networks are a common solution, combining strong representational power with efficient FPGA implementation. In this work, we introduce KANELÉ, a framework that exploits the unique properties of Kolmogorov-Arnold Networks (KANs) for FPGA deployment. Unlike traditional multilayer perceptrons (MLPs), KANs employ learnable one-dimensional splines with fixed domains as edge activations, a structure naturally suited to discretization and efficient LUT mapping. We present the first systematic design flow for implementing KANs on FPGAs, co-optimizing training with quantization and pruning to enable compact, high-throughput, and low-latency KAN architectures. Our results demonstrate up to a 2700x speedup and orders of magnitude resource savings compared to prior KAN-on-FPGA approaches. Moreover, KANELÉ matches or surpasses other LUT-based architectures on widely used benchmarks, particularly for tasks involving symbolic or physical formulas, while balancing resource usage across FPGA hardware. Finally, we showcase the versatility of the framework by extending it to real-time, power-efficient control systems.
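The key structural property the abstract describes—each KAN edge carries a learnable 1D spline over a fixed input domain—is what makes direct LUT mapping natural: a 1D function on a bounded interval can be tabulated at `2^b` points and indexed by a `b`-bit quantized input. The sketch below illustrates this spline-to-LUT idea in NumPy; it is a simplified illustration under assumed parameters (8-bit input/output codes, symmetric quantization, `tanh` standing in for a learned spline), not KANELÉ's actual design flow.

```python
import numpy as np

def build_activation_lut(spline_fn, domain=(-1.0, 1.0), in_bits=8, out_bits=8):
    """Tabulate a 1D edge activation over its fixed domain.

    An `in_bits`-bit input code selects one of 2**in_bits sample points;
    outputs are quantized to signed `out_bits`-bit integers (symmetric
    scheme). Illustrative only -- the paper's flow co-optimizes this with
    training and pruning.
    """
    lo, hi = domain
    xs = np.linspace(lo, hi, 2 ** in_bits)   # one entry per input code
    ys = spline_fn(xs)                        # evaluate the learned 1D activation
    scale = float(np.max(np.abs(ys))) or 1.0  # symmetric quantization scale
    qmax = 2 ** (out_bits - 1) - 1
    lut = np.round(ys / scale * qmax).astype(np.int32)
    return lut, scale

def lut_activation(x, lut, scale, domain=(-1.0, 1.0), out_bits=8):
    """Inference-time evaluation: clamp to the domain, index, dequantize."""
    lo, hi = domain
    idx = np.clip((x - lo) / (hi - lo) * (len(lut) - 1), 0, len(lut) - 1)
    qmax = 2 ** (out_bits - 1) - 1
    return lut[idx.astype(int)] * scale / qmax
```

On an FPGA, only the integer table and the index arithmetic would remain; the float dequantization here is just for checking the approximation against the original function in software.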
Problem

Research questions and friction points this paper is trying to address.

Develops efficient FPGA-based neural networks using KANs
Optimizes KANs for low-latency, high-throughput FPGA deployment
Enables real-time, power-efficient control with LUT-based inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

KANELÉ uses Kolmogorov-Arnold Networks for FPGA deployment
It co-optimizes training with quantization and pruning
It enables compact, high-throughput, low-latency KAN architectures