KAE: Kolmogorov-Arnold Auto-Encoder for Representation Learning

📅 2024-12-31
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Traditional autoencoders suffer from limited capacity in modeling complex nonlinear relationships and lack structural interpretability in learned representations. To address these limitations, we propose the Kolmogorov–Arnold Auto-Encoder (KAE), the first autoencoder framework integrating a learnable-edge Kolmogorov–Arnold Network (KAN). KAE replaces conventional neuron-wise activations with learnable piecewise polynomial functions on edges, explicitly capturing high-order nonlinear interactions. This design achieves both strong representational power and inherent structural interpretability, while supporting end-to-end joint optimization. Extensive experiments across multiple benchmark datasets demonstrate that KAE significantly reduces reconstruction error and consistently outperforms standard autoencoders and existing KAN-based variants on downstream tasks—including retrieval (measured by mAP), classification (accuracy), and denoising (PSNR). These results validate KAE’s dual improvements in latent-space modeling fidelity and generalization capability.

Technology Category

Application Category

📝 Abstract
The Kolmogorov-Arnold Network (KAN) has recently gained attention as an alternative to traditional multi-layer perceptrons (MLPs), offering improved accuracy and interpretability by employing learnable activation functions on edges. In this paper, we introduce the Kolmogorov-Arnold Auto-Encoder (KAE), which integrates KAN with autoencoders (AEs) to enhance representation learning for retrieval, classification, and denoising tasks. Leveraging the flexible polynomial functions in KAN layers, KAE captures complex data patterns and non-linear relationships. Experiments on benchmark datasets demonstrate that KAE improves latent representation quality, reduces reconstruction errors, and achieves superior performance in downstream tasks such as retrieval, classification, and denoising, compared to standard autoencoders and other KAN variants. These results suggest KAE's potential as a useful tool for representation learning. Our code is available at url{https://github.com/SciYu/KAE/}.
Problem

Research questions and friction points this paper is trying to address.

Machine Learning Optimization
Nonlinear Pattern Recognition
Feature Quality Enhancement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Kolmogorov-Arnold Autoencoder
Learnable Activation Functions
Enhanced Representation Learning
Fangchen Yu
Fangchen Yu
Ph.D Candidate, The Chinese University of Hong Kong, Shenzhen
Satistical Machine LearningOptimizationAI for ScienceMLLM
Y
Yidong Lin
The Chinese University of Hong Kong, Shenzhen
Y
Yuqi Ma
The Chinese University of Hong Kong, Shenzhen
Z
Zhenghao Huang
The Chinese University of Hong Kong, Shenzhen
Wenye Li
Wenye Li
The Hong Kong University of Science and Technology (Guangzhou)
Statistical and Unsupervised Learning