Ultrafast On-chip Online Learning via Spline Locality in Kolmogorov-Arnold Networks

📅 2026-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of online learning in high-frequency systems—such as quantum computing and nuclear fusion control—where conventional multilayer perceptrons (MLPs) fail to meet stringent requirements of sub-microsecond latency, fixed-point precision, and strict memory constraints. The authors propose an ultrafast on-chip learning architecture based on Kolmogorov–Arnold Networks (KANs), leveraging the locality of B-splines to enable sparse parameter updates. Combined with fixed-point quantization, the design is efficiently deployed on FPGA hardware. To the best of the authors’ knowledge, this is the first demonstration of model-free online learning at sub-microsecond speeds. The approach substantially outperforms MLPs in both efficiency and representational capacity, while also exhibiting strong numerical robustness under fixed-point arithmetic and resource-efficient scalability.
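The key mechanism here is that a B-spline of degree k is nonzero over only k+1 basis functions at any given input, so an online gradient step writes to just those coefficients rather than every weight, as a dense MLP update would. A minimal sketch of this sparse local update for a single 1-D spline function (all grid sizes, learning rates, and helper names are illustrative, not from the paper):

```python
import numpy as np

def bspline_basis(x, grid, k):
    """Cox-de Boor recursion: values of all degree-k B-spline bases at x."""
    n = len(grid) - 1
    B = np.array([1.0 if grid[i] <= x < grid[i + 1] else 0.0 for i in range(n)])
    for d in range(1, k + 1):
        Bn = np.zeros(n - d)
        for i in range(n - d):
            left = (x - grid[i]) / (grid[i + d] - grid[i]) * B[i]
            right = (grid[i + d + 1] - x) / (grid[i + d + 1] - grid[i + 1]) * B[i + 1]
            Bn[i] = left + right
        B = Bn
    return B

grid = np.linspace(0.0, 1.0, 12)      # knot grid (illustrative size)
k = 3                                 # cubic splines
coef = np.zeros(len(grid) - 1 - k)    # spline coefficients

def predict(x):
    return float(bspline_basis(x, grid, k) @ coef)

def online_update(x, target, lr=0.5):
    """One SGD step on squared error; only the active coefficients change."""
    basis = bspline_basis(x, grid, k)
    active = np.nonzero(basis)[0]     # at most k+1 = 4 indices for a cubic
    err = predict(x) - target
    coef[active] -= lr * err * basis[active]   # sparse, local write
    return active

active = online_update(0.42, 1.0)
print(len(active))                    # at most k+1 coefficients touched
```

On hardware this sparsity is what bounds the per-update memory traffic: regardless of grid size, each sample reads and writes only k+1 coefficients, whereas an MLP step touches every parameter in the affected layers.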

📝 Abstract
Ultrafast online learning is essential for high-frequency systems, such as controls for quantum computing and nuclear fusion, where adaptation must occur on sub-microsecond timescales. Meeting these requirements demands low-latency, fixed-precision computation under strict memory constraints, a regime in which conventional Multi-Layer Perceptrons (MLPs) are both inefficient and numerically unstable. We identify key properties of Kolmogorov-Arnold Networks (KANs) that align with these constraints. Specifically, we show that: (i) KAN updates exploiting B-spline locality are sparse, enabling superior on-chip resource scaling, and (ii) KANs are inherently robust to fixed-point quantization. By implementing fixed-point online training on Field-Programmable Gate Arrays (FPGAs), a representative platform for on-chip computation, we demonstrate that KAN-based online learners are significantly more efficient and expressive than MLPs across a range of low-latency and resource-constrained tasks. To our knowledge, this work is the first to demonstrate model-free online learning at sub-microsecond latencies.
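The abstract's fixed-point claim can be made concrete with a toy Q-format arithmetic sketch: because active B-spline basis values are bounded in [0, 1] and sum to 1, a quantized basis-coefficient dot product stays close to its floating-point value. The Q4.12 format and all names below are assumptions for illustration, not the paper's configuration:

```python
# Illustrative Qm.n fixed-point arithmetic (Q4.12 assumed here).
FRAC_BITS = 12

def to_fixed(x):
    """Quantize a float to a fixed-point integer with FRAC_BITS fractional bits."""
    return int(round(x * (1 << FRAC_BITS)))

def fixed_mul(a, b):
    """Multiply two fixed-point values, rescaling back to FRAC_BITS."""
    return (a * b) >> FRAC_BITS

# Dot product of quantized basis values and spline coefficients.
basis = [0.1, 0.55, 0.3, 0.05]        # partition of unity: sums to 1
coef = [0.25, -0.5, 0.75, 1.0]        # illustrative coefficients
acc = 0
for b, c in zip(basis, coef):
    acc += fixed_mul(to_fixed(b), to_fixed(c))
y = acc / (1 << FRAC_BITS)

float_y = sum(b * c for b, c in zip(basis, coef))
print(abs(y - float_y) < 1e-3)        # quantization error stays small
```

Bounded basis values mean the accumulator range is known at design time, which is exactly what a fixed-point FPGA datapath needs to avoid overflow without wide guard bits.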
Problem

Research questions and friction points this paper is trying to address.

ultrafast online learning
sub-microsecond latency
fixed-precision computation
memory constraints
high-frequency systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Kolmogorov-Arnold Networks
online learning
B-spline locality
fixed-point quantization
FPGA