KAC: Kolmogorov-Arnold Classifier for Continual Learning

📅 2025-03-27
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address classification-space drift and catastrophic forgetting induced by linear classifiers in continual learning, this paper proposes the Kolmogorov–Arnold Classifier (KAC), the first learnable nonlinear classifier for continual learning classification tasks built on the Kolmogorov–Arnold Network (KAN). KAC replaces KAN's learnable spline activations with radial basis functions (RBFs) for better compatibility with continual learning, substantially improving cross-task representation stability and incremental scalability. Evaluated on standard benchmarks, including Split-CIFAR100, ImageNet-R, and Tiny-ImageNet, KAC integrates seamlessly with multiple continual learning frameworks (e.g., EWC, DER, LwF) and consistently outperforms linear counterparts, achieving average accuracy gains of 2.1%–4.7% across settings while demonstrating superior generalization robustness. This work establishes a novel architectural paradigm for nonlinear decision boundaries in continual learning, offering both theoretical grounding, via the Kolmogorov–Arnold superposition theorem, and practical efficacy in mitigating representational degradation over task sequences.

šŸ“ Abstract
Continual learning requires models to train continuously across consecutive tasks without forgetting. Most existing methods utilize linear classifiers, which struggle to maintain a stable classification space while learning new tasks. Inspired by the success of Kolmogorov-Arnold Networks (KAN) in preserving learning stability during simple continual regression tasks, we set out to explore their potential in more complex continual learning scenarios. In this paper, we introduce the Kolmogorov-Arnold Classifier (KAC), a novel classifier developed for continual learning based on the KAN structure. We delve into the impact of KAN's spline functions and introduce Radial Basis Functions (RBF) for improved compatibility with continual learning. We replace linear classifiers with KAC in several recent approaches and conduct experiments across various continual learning benchmarks, all of which demonstrate performance improvements, highlighting the effectiveness and robustness of KAC in continual learning. The code is available at https://github.com/Ethanhuhuhu/KAC.
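The abstract describes replacing the linear classifier with a KAN-structured head whose spline activations are swapped for radial basis functions. The paper's actual implementation is at the linked repository; below is only a minimal, illustrative numpy sketch of the general idea: each input-to-class edge applies a learnable 1-D function parameterised as a weighted sum of fixed-centre RBFs, and logits are sums of these edge functions. All names, shapes, and hyperparameters here are assumptions for illustration, not the paper's exact design.

```python
import numpy as np

class RBFKANClassifier:
    """Sketch of a KAN-style classifier head with RBF-parameterised
    edge functions (hypothetical implementation, not the paper's code)."""

    def __init__(self, in_features, num_classes, num_rbf=8,
                 grid=(-2.0, 2.0), seed=0):
        rng = np.random.default_rng(seed)
        # Fixed RBF centres spread over the expected feature range.
        self.centers = np.linspace(grid[0], grid[1], num_rbf)       # (K,)
        # Inverse squared width chosen so neighbouring RBFs overlap.
        self.gamma = (num_rbf / (grid[1] - grid[0])) ** 2
        # Learnable coefficients: one weight per (input dim, class, basis).
        self.coef = rng.normal(0.0, 0.1, (in_features, num_classes, num_rbf))

    def __call__(self, x):
        # x: (batch, in_features) -> logits: (batch, num_classes)
        # Evaluate every RBF on every feature value: (B, D, K).
        phi = np.exp(-self.gamma * (x[..., None] - self.centers) ** 2)
        # Each edge's function is a weighted RBF sum; sum edges per class.
        return np.einsum('bdk,dck->bc', phi, self.coef)

clf = RBFKANClassifier(in_features=16, num_classes=10)
logits = clf(np.random.default_rng(1).normal(size=(4, 16)))
print(logits.shape)  # (4, 10)
```

In a continual-learning setting, the appeal of this parameterisation is locality: an RBF coefficient mainly affects inputs near its centre, so updates for new tasks perturb less of the previously learned decision function than a dense linear layer would.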
Problem

Research questions and friction points this paper is trying to address.

Mitigate catastrophic forgetting in continual learning with KAC
Replace linear classifiers, which struggle to keep a stable classification space
Improve classifier stability using spline and RBF activation functions
Innovation

Methods, ideas, or system contributions that make the work stand out.

KAC replaces linear classifiers with a KAN-structured head
Introduces RBFs for better compatibility with continual learning
Demonstrates consistent performance gains across multiple benchmarks
👥 Authors
Yusong Hu (VCIP, CS, Nankai University)
Zichen Liang (Nankai University)
Fei Yang (VCIP, CS, Nankai University; NKIARI, Shenzhen Futian)
Qibin Hou (Nankai University)
Xialei Liu (VCIP, CS, Nankai University; NKIARI, Shenzhen Futian)
Ming-Ming Cheng (Professor of Computer Science, Nankai University)