On the Robustness of Kolmogorov-Arnold Networks: An Adversarial Perspective

📅 2024-08-25
🏛️ arXiv.org
📈 Citations: 5
Influential: 1
🤖 AI Summary
This work presents the first systematic evaluation of the adversarial robustness of Kolmogorov–Arnold Networks (KANs) for image classification. Addressing the lack of security assessment in prior KAN research, we conduct comprehensive adversarial evaluations on CIFAR-10, CIFAR-100, and an ImageNet subset, employing white-box attacks (PGD, FGSM) and black-box attacks (NES, Square) across small-, medium-, and large-scale fully connected and convolutional KAN architectures. Results demonstrate that large-scale KANs substantially outperform conventional neural networks—achieving up to a 12.3% average accuracy gain under diverse adversarial perturbations—whereas smaller and medium-sized KANs exhibit no consistent robustness advantage. This reveals a critical “scale effect” as the primary mechanism underlying enhanced KAN robustness. Our study establishes the first empirical benchmark and a structured security analysis framework for trustworthy KAN design, providing foundational insights into architectural determinants of adversarial resilience in KANs.
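The white-box attacks named above (FGSM, PGD) perturb an input along the sign of the loss gradient with respect to that input. As a minimal illustration of the single-step FGSM case, the sketch below attacks a toy linear classifier in NumPy; the linear model, weights, and epsilon value are stand-ins for illustration only and are not the KAN or CNN architectures evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-class linear classifier (a hypothetical stand-in for the
# KAN / CNN models the paper actually evaluates).
W = rng.normal(size=(3, 8))  # 3 classes, 8 input features
b = np.zeros(3)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def loss_and_grad(x, y):
    """Cross-entropy loss and its gradient w.r.t. the INPUT x."""
    p = softmax(W @ x + b)
    onehot = np.eye(3)[y]
    loss = -np.log(p[y])
    grad_x = W.T @ (p - onehot)  # d(loss)/dx for a linear model
    return loss, grad_x

def fgsm(x, y, eps=0.1):
    """One-step FGSM: shift each feature by eps in the gradient's sign."""
    _, g = loss_and_grad(x, y)
    return x + eps * np.sign(g)

x = rng.normal(size=8)
y = 1
clean_loss, _ = loss_and_grad(x, y)
adv_loss, _ = loss_and_grad(fgsm(x, y), y)
```

PGD, the iterated variant used in the paper, repeats this step several times with a smaller step size, projecting back into an epsilon-ball after each step.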

📝 Abstract
Kolmogorov-Arnold Networks (KANs) have recently emerged as a novel approach to function approximation, demonstrating remarkable potential in various domains. Despite their theoretical promise, the robustness of KANs under adversarial conditions has yet to be thoroughly examined. In this paper, we explore the adversarial robustness of KANs, with a particular focus on image classification tasks. We assess the performance of KANs against standard white-box and black-box adversarial attacks, comparing their resilience to that of established neural network architectures. Our experimental evaluation encompasses a variety of standard image classification benchmark datasets and investigates both fully connected and convolutional neural network architectures at three sizes: small, medium, and large. We conclude that small- and medium-sized KANs (either fully connected or convolutional) are not consistently more robust than their standard counterparts, but that large-sized KANs are, by and large, more robust. This comprehensive evaluation of KANs in adversarial scenarios offers the first in-depth analysis of KAN security, laying the groundwork for future research in this emerging field.
Problem

Research questions and friction points this paper is trying to address.

Evaluates adversarial robustness of Kolmogorov-Arnold Networks (KANs).
Compares KANs' resilience to standard neural networks under attacks.
Focuses on image classification tasks using benchmark datasets.
Innovation

Methods, ideas, or system contributions that make the work stand out.

First systematic adversarial-robustness benchmark for KANs
Identifies a "scale effect": large KANs gain up to 12.3% average accuracy under attack
Covers white-box (PGD, FGSM) and black-box (NES, Square) attacks across architecture sizes
Tal Alter
Dept. of Computer Science, Ben-Gurion University, Beer-Sheva, 8410501, Israel
Raz Lapid
Dept. of Computer Science, Ben-Gurion University, Beer-Sheva, 8410501, Israel; DeepKeep, Tel-Aviv, Israel
Moshe Sipper
Ben-Gurion University
evolutionary machine learning · evolutionary deep learning · bio-inspired computing · cellular computing