Light-ResKAN: A Parameter-Sharing Lightweight KAN with Gram Polynomials for Efficient SAR Image Recognition

📅 2026-04-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of balancing accuracy and efficiency in synthetic aperture radar (SAR) image recognition on edge devices by proposing Light-ResKAN, a lightweight model that integrates Kolmogorov–Arnold Network (KAN) convolutions, learnable Gram polynomial activation functions, and channel-wise parameter sharing. This architecture substantially reduces computational complexity while preserving strong representational capacity for the nonlinear characteristics of SAR imagery. Evaluated on the MSTAR, FUSAR-Ship, and SAR-ACD datasets, Light-ResKAN achieves classification accuracies of 99.09%, 93.01%, and 97.26%, respectively. Compared to VGG16, it reduces floating-point operations (FLOPs) by 82.90× and model parameters by 163.78× when processing 1024×1024 MSTAR images, demonstrating its suitability for resource-constrained deployment.
📝 Abstract
Synthetic Aperture Radar (SAR) image recognition is vital for disaster monitoring, military reconnaissance, and ocean observation. However, large SAR image sizes hinder deep learning deployment on resource-constrained edge devices, and existing lightweight models struggle to balance high-precision feature extraction with low computational requirements. The emerging Kolmogorov-Arnold Network (KAN) enhances fitting by replacing fixed activations with learnable ones, reducing parameters and computation. Inspired by KAN, we propose Light-ResKAN to achieve a better balance between precision and efficiency. First, Light-ResKAN modifies ResNet by replacing convolutions with KAN convolutions, enabling adaptive feature extraction for SAR images. Second, we use Gram Polynomials as activations, which are well-suited for SAR data to capture complex non-linear relationships. Third, we employ a parameter-sharing strategy: each kernel shares parameters per channel, preserving unique features while reducing parameters and FLOPs. Our model achieves 99.09%, 93.01%, and 97.26% accuracy on MSTAR, FUSAR-Ship, and SAR-ACD datasets, respectively. Experiments on MSTAR resized to $1024 \times 1024$ show that compared to VGG16, our model reduces FLOPs by $82.90 \times$ and parameters by $163.78 \times$. This work establishes an efficient solution for edge SAR image recognition.
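The abstract's second contribution, learnable Gram polynomial activations, can be illustrated with a minimal sketch: squash the input into the basis domain, evaluate a polynomial basis by a three-term recurrence, and take a learnable weighted sum. The recurrence below is the Legendre-type one (the continuous limit of Gram polynomials); the paper's exact discrete Gram recurrence, coefficient layout, and normalization may differ, so treat the names and initialization here as illustrative assumptions, not the authors' implementation.

```python
import math

def gram_basis(x, degree):
    """Evaluate basis polynomials g_0..g_degree at x via a three-term
    recurrence (Legendre-type stand-in for the Gram recurrence)."""
    g = [1.0, x]
    for n in range(2, degree + 1):
        g.append(((2 * n - 1) * x * g[n - 1] - (n - 1) * g[n - 2]) / n)
    return g[: degree + 1]

class GramActivation:
    """Learnable activation: a weighted sum of basis polynomials.
    Inputs are squashed into [-1, 1] with tanh, the basis domain."""
    def __init__(self, degree=3, coeffs=None):
        self.degree = degree
        # Learnable coefficients; initialized near the identity map
        # (coefficient 1.0 on g_1(x) = x) as an illustrative choice.
        self.coeffs = coeffs or [0.0, 1.0] + [0.0] * (degree - 1)

    def __call__(self, x):
        t = math.tanh(x)
        return sum(c * g for c, g in zip(self.coeffs, gram_basis(t, self.degree)))
```

At this initialization the activation reduces to `tanh`; training would adjust the per-channel coefficients so each activation adapts its shape to the local feature statistics, which is the fitting flexibility the abstract attributes to KAN-style layers.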
Problem

Research questions and friction points this paper is trying to address.

SAR image recognition
lightweight model
edge computing
computational efficiency
high-precision feature extraction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Kolmogorov-Arnold Network
Gram Polynomials
parameter sharing
lightweight architecture
SAR image recognition
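The parameter-sharing contribution listed above can be made concrete with a rough parameter count. The layout below (one spatial kernel reused across input channels, plus per-output polynomial coefficients) is a hypothetical simplification for illustration; the paper's actual per-channel sharing scheme and the layer shapes behind its reported 163.78× reduction are not reproduced here.

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard 2-D convolution (bias omitted)."""
    return c_in * c_out * k * k

def shared_conv_params(c_out, k, degree):
    """Rough count for a channel-wise shared polynomial kernel:
    one k*k spatial kernel per output channel, reused across input
    channels, plus (degree + 1) activation coefficients per output
    channel (hypothetical layout, for illustration only)."""
    return c_out * k * k + c_out * (degree + 1)

standard = conv_params(64, 128, 3)        # 64 * 128 * 9 = 73,728
shared = shared_conv_params(128, 3, 3)    # 128 * 9 + 128 * 4 = 1,664
print(standard, shared, round(standard / shared, 1))
```

Even this toy layout cuts the layer's parameters by more than an order of magnitude, which is the mechanism behind the FLOPs and parameter reductions the summary reports.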
Pan Yi
College of Electronic Science and Technology, National University of Defense Technology, Changsha, 410073, China
Weijie Li
National University of Defense Technology
Synthetic Aperture Radar, Automatic Target Recognition, Foundation Model, Self-Supervised Learning
Xiaodong Chen
College of Electronic Science and Technology, National University of Defense Technology, Changsha, 410073, China
Jiehua Zhang
University of Oulu
Deep learning, Object detection, Model quantization
Li Liu
College of Electronic Science and Technology, National University of Defense Technology, Changsha, 410073, China
Yongxiang Liu
Professor, National University of Defense Technology
Remote Sensing, Synthetic Aperture Radar, Radar, Image Processing, Pattern Recognition