🤖 AI Summary
This work presents the first systematic evaluation of the adversarial robustness of Kolmogorov–Arnold Networks (KANs) for image classification. Addressing the lack of security assessment in prior KAN research, we conduct comprehensive adversarial evaluations on CIFAR-10, CIFAR-100, and an ImageNet subset, employing white-box attacks (PGD, FGSM) and black-box attacks (NES, Square) across small-, medium-, and large-scale fully connected and convolutional KAN architectures. Results demonstrate that large-scale KANs substantially outperform conventional neural networks—achieving up to a 12.3% average accuracy gain under diverse adversarial perturbations—whereas small- and medium-scale KANs exhibit no consistent robustness advantage. This reveals a critical “scale effect” as the primary mechanism underlying enhanced KAN robustness. Our study establishes the first empirical benchmark and a structured security analysis framework for trustworthy KAN design, providing foundational insights into architectural determinants of adversarial resilience in KANs.
📝 Abstract
Kolmogorov–Arnold Networks (KANs) have recently emerged as a novel approach to function approximation, demonstrating remarkable potential in various domains. Despite their theoretical promise, the robustness of KANs under adversarial conditions has yet to be thoroughly examined. In this paper, we explore the adversarial robustness of KANs, with a particular focus on image classification tasks. We assess the performance of KANs against standard white-box and black-box adversarial attacks, comparing their resilience to that of established neural network architectures. Our experimental evaluation encompasses a variety of standard image classification benchmark datasets and investigates both fully connected and convolutional neural network architectures of three sizes: small, medium, and large. We conclude that small- and medium-sized KANs (either fully connected or convolutional) are not consistently more robust than their standard counterparts, but that large-sized KANs are, by and large, more robust. This comprehensive evaluation of KANs in adversarial scenarios offers the first in-depth analysis of KAN security, laying the groundwork for future research in this emerging field.
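To make the white-box attack setting concrete, the sketch below shows the core of FGSM, the simplest of the attacks named above: perturb the input one step of size ε along the sign of the loss gradient, then clip back to the valid pixel range. This is an illustrative toy with a hypothetical linear softmax classifier and an assumed budget of ε = 8/255, not the paper's actual KAN models or evaluation code.

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """Fast Gradient Sign Method: one step of size eps along sign(grad), then
    clip to the valid pixel range [0, 1]."""
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

# Toy stand-in for a classifier: logits = W @ x with cross-entropy loss.
# (Hypothetical setup: 10 classes, a flattened 32x32 grayscale input.)
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 32 * 32))
x = rng.uniform(size=32 * 32)   # "clean" image with pixels in [0, 1]
y = 3                           # true label

logits = W @ x
p = np.exp(logits - logits.max())
p /= p.sum()                    # softmax probabilities
grad_logits = p.copy()
grad_logits[y] -= 1.0           # d(cross-entropy)/d(logits)
grad_x = W.T @ grad_logits      # d(loss)/d(x) by the chain rule

x_adv = fgsm_perturb(x, grad_x, eps=8 / 255)
print(float(np.abs(x_adv - x).max()) <= 8 / 255 + 1e-9)  # → True
```

PGD, the stronger white-box attack in the evaluation, simply iterates this step several times with a projection back onto the ε-ball after each update.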