🤖 AI Summary
To address the high computational overhead and poor channel-noise robustness of large-scale models (LSMs) in semantic communication (SC), this paper proposes a robust knowledge distillation-driven lightweight SC framework. Methodologically, it introduces: (1) KDL-DARTS, a novel algorithm that folds a knowledge distillation loss and a complexity penalty into differentiable architecture search; (2) a two-stage robust knowledge distillation (RKD) mechanism that preserves the semantic fidelity of compact student models under noisy channels; and (3) a channel-aware Transformer (CAT) channel codec trained under diverse channel conditions with variable-length outputs. The framework achieves over 90% parameter reduction while retaining more than 98% of the teacher model's classification accuracy on image tasks, and it significantly outperforms state-of-the-art SC methods under AWGN and Rayleigh fading channels, demonstrating superior efficiency, robustness, and practicality.
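To sketch how a distillation loss and a complexity penalty can be folded into one differentiable search objective, consider the toy PyTorch loss below. It is not the paper's KDL-DARTS formulation; the function name, the weighting parameters (`temperature`, `alpha`, `beta`), and the per-operation cost tensor `op_costs` are all illustrative assumptions.

```python
import torch.nn.functional as F

def kdl_darts_style_loss(student_logits, teacher_logits, labels,
                         arch_params, op_costs,
                         temperature=4.0, alpha=0.5, beta=1e-3):
    """Illustrative composite objective: task loss + distillation loss
    + a differentiable complexity penalty on the architecture weights.
    All hyperparameters here are assumptions, not the paper's values."""
    # Hard-label supervision for the candidate (student) architecture.
    ce = F.cross_entropy(student_logits, labels)
    # Soft-label distillation from the frozen teacher LSM.
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    # Expected model cost under the softmax-relaxed architecture
    # distribution, pushing the search toward lightweight encoders.
    op_probs = F.softmax(arch_params, dim=-1)   # shape: (edges, num_ops)
    complexity = (op_probs * op_costs).sum()    # op_costs: params per op
    return (1.0 - alpha) * ce + alpha * kd + beta * complexity
```

Because the complexity term is a smooth function of the architecture parameters, it can be minimized by gradient descent jointly with the task and distillation losses, which is what makes a single-shot differentiable search over lightweight encoders possible.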
📝 Abstract
Large-scale models (LSMs) can be an effective framework for semantic representation and understanding, thereby providing a suitable tool for designing semantic communication (SC) systems. However, their direct deployment is often hindered by high computational complexity and resource requirements. In this paper, a novel robust knowledge distillation-based semantic communication (RKD-SC) framework is proposed to enable efficient and channel-noise-robust LSM-powered SC. The framework addresses two key challenges: determining optimal compact model architectures and effectively transferring knowledge while maintaining robustness against channel noise. First, a knowledge distillation-based lightweight differentiable architecture search (KDL-DARTS) algorithm is proposed. This algorithm integrates knowledge distillation loss and a complexity penalty into the neural architecture search process to identify high-performance, lightweight semantic encoder architectures. Second, a novel two-stage robust knowledge distillation (RKD) algorithm is developed to transfer semantic capabilities from an LSM (teacher) to a compact encoder (student) and subsequently enhance system robustness. To further improve resilience to channel impairments, a channel-aware transformer (CAT) block is introduced as the channel codec, trained under diverse channel conditions with variable-length outputs. Extensive simulations on image classification tasks demonstrate that the RKD-SC framework significantly reduces model parameters while preserving a high degree of the teacher model's performance and exhibiting superior robustness compared to existing methods.
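As a concrete picture of the "diverse channel conditions" the channel codec is trained against, below is a minimal differentiable channel layer, not the paper's CAT block, that applies AWGN or flat Rayleigh fading at a randomly drawn SNR per forward pass; the class name, SNR range, and fading flag are assumptions for illustration.

```python
import torch


class SimulatedChannel(torch.nn.Module):
    """Hypothetical training-time channel: AWGN or flat Rayleigh fading
    at a random SNR, so gradients flow through channel impairments."""

    def __init__(self, snr_db_range=(0.0, 20.0), fading="awgn"):
        super().__init__()
        self.snr_db_range = snr_db_range
        self.fading = fading  # "awgn" or "rayleigh"

    def forward(self, x):
        # Sample one SNR per forward pass to expose the codec to
        # varying channel quality during training.
        snr_db = torch.empty(1, device=x.device).uniform_(*self.snr_db_range)
        if self.fading == "rayleigh":
            # Flat Rayleigh fading: scale the signal by |h|, h ~ CN(0, 1).
            h = torch.randn(2, device=x.device) / (2.0 ** 0.5)
            x = x * h.norm()
        signal_power = x.pow(2).mean()
        noise_power = signal_power / (10.0 ** (snr_db / 10.0))
        return x + noise_power.sqrt() * torch.randn_like(x)
```

At evaluation time, the same layer can be pinned to a single SNR and fading type to reproduce the AWGN and Rayleigh test conditions the abstract reports results under.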