ALWNN Empowered Automatic Modulation Classification: Conquering Complexity and Scarce Sample Conditions

📅 2025-03-24
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
To address the excessive dependence of deep learning models on labeled data, computational resources, and storage overhead in automatic modulation classification (AMC), this paper proposes an Adaptive Lightweight Wavelet Neural Network (ALWNN) and a few-shot prototype learning framework (MALWNN) tailored for low-resource, few-shot scenarios. ALWNN innovatively integrates adaptive wavelet transform with depthwise separable convolution to achieve compact model architecture and enhanced feature representation. MALWNN further incorporates prototypical networks to enable accurate AMC under extreme data scarcity. Evaluated on standard benchmarks, MALWNN achieves state-of-the-art accuracy under 1-shot and 5-shot settings, while reducing FLOPs and parameter count significantly compared to existing methods. Real-world deployment on USRP software-defined radios and Raspberry Pi embedded platforms confirms its efficiency, low latency, and practical viability for edge-based AMC applications.

πŸ“ Abstract
In Automatic Modulation Classification (AMC), deep learning methods have shown remarkable performance, offering significant advantages over traditional approaches and demonstrating vast potential. Nevertheless, they have notable drawbacks, particularly their high demands for storage, computational resources, and large-scale labeled data, which limit their practical application in real-world scenarios. To tackle this issue, this paper proposes an automatic modulation classification model based on an Adaptive Lightweight Wavelet Neural Network (ALWNN) and a few-shot framework (MALWNN). The ALWNN model, by integrating an adaptive wavelet neural network with depthwise separable convolution, reduces the number of model parameters and the computational complexity. The MALWNN framework, using ALWNN as an encoder and incorporating prototypical network technology, decreases the model's dependence on the quantity of labeled samples. Simulation results indicate that the model performs remarkably well on mainstream datasets. Moreover, in terms of Floating Point Operations (FLOPs) and Normalized Multiply-Accumulate Complexity (NMACC), ALWNN significantly reduces computational complexity compared to existing methods, which is further validated by real-world system tests on USRP and Raspberry Pi platforms. Experiments with MALWNN show its superior performance in few-shot learning scenarios compared to other algorithms.
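The prototypical-network idea the abstract describes can be sketched briefly: embed a few labeled support examples per class, average them into class prototypes, and label each query by its nearest prototype. This is a minimal NumPy sketch of that generic technique, not the paper's implementation; the ALWNN encoder is omitted and all names are illustrative.

```python
import numpy as np

def prototype_classify(support, support_labels, query, n_classes):
    """Nearest-prototype classification (prototypical-network style).

    support:        (n_support, dim) embeddings of labeled examples
    support_labels: (n_support,) integer class labels in [0, n_classes)
    query:          (n_query, dim) embeddings to classify
    Returns predicted class indices for each query embedding.
    """
    # Each class prototype is the mean of its support embeddings.
    protos = np.stack([support[support_labels == c].mean(axis=0)
                       for c in range(n_classes)])
    # Euclidean distance from every query to every prototype.
    dists = np.linalg.norm(query[:, None, :] - protos[None, :, :], axis=-1)
    # Assign each query the label of its closest prototype.
    return dists.argmin(axis=1)

# Toy 2-way, 2-shot example with 2-D "embeddings".
support = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = np.array([0, 0, 1, 1])
query = np.array([[0.2, 0.1], [4.9, 5.0]])
print(prototype_classify(support, labels, query, 2))  # [0 1]
```

In a real few-shot pipeline the rows of `support` and `query` would be encoder outputs (here, ALWNN features) rather than raw vectors, but the prototype-and-nearest-distance step is the same.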
Problem

Research questions and friction points this paper is trying to address.

Reducing model parameters and computational complexity in AMC
Decreasing dependence on large-scale labeled data samples
Improving performance in few-shot learning scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive Lightweight Wavelet Neural Network reduces complexity
Few-shot framework decreases dependence on sample quantity
Depthwise separable convolution lowers model parameters
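The parameter saving from depthwise separable convolution, mentioned in the last bullet, follows from a simple count: a standard convolution couples every input channel to every output channel through a full k×k kernel, while the separable version splits this into a per-channel k×k depthwise step plus a 1×1 pointwise step. The sketch below just computes those counts (the layer sizes are illustrative, not taken from the paper):

```python
def standard_conv_params(c_in, c_out, k):
    # Standard conv: one k x k kernel per (input, output) channel pair.
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # Depthwise step: one k x k kernel per input channel.
    # Pointwise step: a 1 x 1 conv mixing c_in channels into c_out.
    return c_in * k * k + c_in * c_out

std = standard_conv_params(64, 128, 3)        # 73728
sep = depthwise_separable_params(64, 128, 3)  # 576 + 8192 = 8768
print(std, sep, round(std / sep, 1))          # 73728 8768 8.4
```

For a 3×3 layer with 64 input and 128 output channels, the separable form uses roughly 8x fewer parameters, which is why it suits the lightweight, edge-deployable design the paper targets.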