🤖 AI Summary
This work addresses the high overhead of frequent beam training in millimeter-wave (mmWave) communications under high-mobility conditions, as well as the heavy computational and memory demands of existing deep learning approaches. The authors propose a lightweight beam prediction framework based on knowledge distillation that leverages sub-6 GHz channel information to predict the optimal mmWave beam efficiently. Compact student networks are designed using both individual and relational knowledge distillation strategies, achieving substantial model compression with little loss in performance. Experimental results show that the proposed student model retains only 1% of the trainable parameters of the large teacher model and requires 99% less computation, while achieving comparable beam prediction accuracy and spectral efficiency.
📝 Abstract
Beamforming in millimeter-wave (mmWave) high-mobility environments typically incurs substantial beam training overhead. While prior studies suggest that sub-6 GHz channels can be exploited to predict optimal mmWave beams, existing methods rely on large deep learning (DL) models with prohibitive computational and memory requirements. In this paper, we propose a computationally efficient framework for sub-6 GHz channel-to-mmWave beam mapping based on knowledge distillation (KD). We develop two compact student DL architectures, built on individual and relational distillation strategies, that comprise only a few hidden layers yet closely mimic the performance of large teacher DL models. Extensive simulations demonstrate that the proposed student models match the teacher's beam prediction accuracy and spectral efficiency while reducing the number of trainable parameters and the computational complexity by 99%.
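For intuition, here is a minimal PyTorch sketch of how a compact student and a combined individual-plus-relational distillation objective of the kind described above could be wired together. All names, layer sizes, codebook dimensions, and loss weights are illustrative assumptions; the paper's actual architectures and hyperparameters are not reproduced here.

```python
# Hypothetical sketch of individual + relational knowledge distillation
# for sub-6 GHz -> mmWave beam prediction. Sizes and weights are assumed,
# not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_BEAMS = 64   # assumed mmWave beam codebook size
SUB6_DIM = 128   # assumed flattened sub-6 GHz channel feature length


def make_student() -> nn.Module:
    # Compact student with only a few hidden layers, per the paper's design goal.
    return nn.Sequential(
        nn.Linear(SUB6_DIM, 64), nn.ReLU(),
        nn.Linear(64, NUM_BEAMS),
    )


def individual_kd_loss(student_logits, teacher_logits, T: float = 4.0):
    # Individual (response-based) KD: match the teacher's softened
    # per-sample beam distribution via KL divergence.
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T


def relational_kd_loss(student_feats, teacher_feats):
    # Relational KD: match the pairwise distance structure between samples
    # in the student's and teacher's feature spaces (distance-wise RKD).
    def norm_pdist(x):
        d = torch.cdist(x, x, p=2)
        return d / (d[d > 0].mean() + 1e-8)  # scale by mean nonzero distance
    return F.smooth_l1_loss(norm_pdist(student_feats), norm_pdist(teacher_feats))


def total_loss(student_logits, teacher_logits, labels, alpha=0.5, beta=0.5):
    # Hard-label cross-entropy plus the two distillation terms; alpha/beta are
    # assumed weights. Logits double as features here for simplicity, though
    # penultimate-layer embeddings are more common for relational KD.
    ce = F.cross_entropy(student_logits, labels)
    kd = individual_kd_loss(student_logits, teacher_logits)
    rkd = relational_kd_loss(student_logits, teacher_logits)
    return ce + alpha * kd + beta * rkd


# Usage sketch: distill one batch from a frozen teacher (stand-in here).
x = torch.randn(32, SUB6_DIM)
labels = torch.randint(0, NUM_BEAMS, (32,))
student = make_student()
with torch.no_grad():
    teacher_logits = torch.randn(32, NUM_BEAMS)  # placeholder teacher output
loss = total_loss(student(x), teacher_logits, labels)
loss.backward()
```

In this reading, the individual term transfers per-sample knowledge (the teacher's beam-probability shape), while the relational term transfers structural knowledge (how samples relate to one another), which is one plausible way the two strategies complement each other in a heavily compressed student.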