🤖 AI Summary
To address the excessive dependence of deep learning models on labeled data, computational resources, and storage overhead in automatic modulation classification (AMC), this paper proposes an Adaptive Lightweight Wavelet Neural Network (ALWNN) and a few-shot prototype learning framework (MALWNN) tailored for low-resource, few-shot scenarios. ALWNN innovatively integrates adaptive wavelet transform with depthwise separable convolution to achieve compact model architecture and enhanced feature representation. MALWNN further incorporates prototypical networks to enable accurate AMC under extreme data scarcity. Evaluated on standard benchmarks, MALWNN achieves state-of-the-art accuracy under 1-shot and 5-shot settings, while reducing FLOPs and parameter count significantly compared to existing methods. Real-world deployment on USRP software-defined radios and Raspberry Pi embedded platforms confirms its efficiency, low latency, and practical viability for edge-based AMC applications.
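The parameter savings from depthwise separable convolution mentioned above can be illustrated with a quick count. This is a minimal sketch, not the paper's actual layer configuration: the channel sizes and kernel width below are arbitrary assumptions chosen only to show the ratio of a standard convolution's parameters to a depthwise + pointwise factorization.

```python
def standard_conv_params(c_in, c_out, k):
    # Each of the c_out filters spans all c_in channels with a k x k kernel.
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # Depthwise stage: one k x k kernel per input channel.
    # Pointwise stage: a 1 x 1 convolution that mixes channels.
    return c_in * k * k + c_in * c_out

# Hypothetical layer sizes (not taken from the paper).
c_in, c_out, k = 64, 128, 3
std = standard_conv_params(c_in, c_out, k)
sep = depthwise_separable_params(c_in, c_out, k)
print(std, sep, round(std / sep, 1))  # → 73728 8768 8.4
```

For these assumed sizes the factorized layer needs roughly 8x fewer parameters, and the same ratio applies to its multiply-accumulate count, which is the source of the FLOP reductions the summary describes.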
📝 Abstract
In Automatic Modulation Classification (AMC), deep learning methods have shown remarkable performance, offering significant advantages over traditional approaches and demonstrating vast potential. Nevertheless, they have notable drawbacks, particularly their high demands for storage, computational resources, and large-scale labeled data, which limit their practical application in real-world scenarios. To tackle this issue, this paper proposes an automatic modulation classification model based on an Adaptive Lightweight Wavelet Neural Network (ALWNN) and a few-shot learning framework (MALWNN). The ALWNN model integrates an adaptive wavelet neural network with depthwise separable convolution to reduce the number of model parameters and the computational complexity. The MALWNN framework uses ALWNN as an encoder and incorporates a prototypical network to decrease the model's dependence on the quantity of labeled samples. Simulation results indicate that the model performs remarkably well on mainstream datasets. Moreover, in terms of Floating-Point Operations (FLOPs) and Normalized Multiply-Accumulate Complexity (NMACC), ALWNN significantly reduces computational complexity compared to existing methods, which is further validated by real-world system tests on USRP and Raspberry Pi platforms. Experiments with MALWNN show superior performance in few-shot scenarios compared to other algorithms.
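The prototypical-network component of MALWNN can be sketched in a few lines. The abstract does not give implementation details, so the following is a generic prototypical-network classifier under standard assumptions: the encoder (ALWNN in the paper) is abstracted away as precomputed embeddings, each class prototype is the mean of its support-set embeddings, and queries are assigned to the nearest prototype under squared Euclidean distance. The toy 2-way 5-shot episode and 4-dimensional features are illustrative only.

```python
import numpy as np

def prototypes(support_emb, support_lbl, n_classes):
    # Prototype = mean embedding of each class's support samples.
    return np.stack([support_emb[support_lbl == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_emb, protos):
    # Assign each query to the nearest prototype (squared Euclidean distance).
    d = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

rng = np.random.default_rng(0)
# Toy 2-way 5-shot episode with 4-dimensional "encoder" features:
# class 0 clusters near 0, class 1 clusters near 1.
support = np.concatenate([rng.normal(0.0, 0.1, (5, 4)),
                          rng.normal(1.0, 0.1, (5, 4))])
labels = np.array([0] * 5 + [1] * 5)
protos = prototypes(support, labels, n_classes=2)
queries = np.array([[0.0, 0.0, 0.0, 0.0],
                    [1.0, 1.0, 1.0, 1.0]])
print(classify(queries, protos))  # → [0 1]
```

Because classification reduces to a distance comparison against per-class means, new modulation classes can be handled from only a handful of labeled samples, which is what enables the 1-shot and 5-shot settings described above.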