Channel Prediction under Network Distribution Shift Using Continual Learning-based Loss Regularization

📅 2025-09-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Channel prediction performance for mobile users traversing heterogeneous wireless networks—characterized by varying antenna layouts, carrier frequencies, and scattering properties—degrades significantly due to distribution shift, causing a 37.5% increase in normalized mean square error (NMSE) during handover. Method: This paper proposes a continual learning-based loss regularization framework to mitigate catastrophic forgetting. It innovatively integrates Elastic Weight Consolidation (EWC) and Synaptic Intelligence (SI) for joint parameter importance estimation and regularization, enabling historical knowledge retention and adaptation to new network configurations under constant memory overhead. Results: Evaluated on 3GPP-standardized scenarios and multiple neural network architectures, the approach demonstrates substantial gains: under high SNR conditions, SI reduces NMSE by 1.8 dB (32%–34%), while EWC achieves a 1.4 dB reduction (17%–28%), both markedly outperforming conventional methods.

📝 Abstract
Modern wireless networks face critical challenges when mobile users traverse heterogeneous network configurations with varying antenna layouts, carrier frequencies, and scattering statistics. Traditional predictors degrade under distribution shift, with NMSE rising by 37.5% during cross-configuration handovers. This work addresses catastrophic forgetting in channel prediction by proposing a continual learning framework based on loss regularization. The approach augments standard training objectives with penalty terms that selectively preserve network parameters essential for previous configurations while enabling adaptation to new environments. Two prominent regularization strategies are investigated: Elastic Weight Consolidation (EWC) and Synaptic Intelligence (SI). Across 3GPP scenarios and multiple architectures, SI lowers the high-SNR NMSE floor by up to 1.8 dB ($\approx$32--34%), while EWC achieves up to 1.4 dB ($\approx$17--28%). Notably, standard EWC incurs $\mathcal{O}(MK)$ complexity (storing $M$ Fisher diagonal entries and corresponding parameter snapshots across $K$ tasks) unless consolidated, whereas SI maintains $\mathcal{O}(M)$ memory complexity (storing $M$ model parameters), independent of task sequence length, making it suitable for resource-constrained wireless infrastructure.
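The penalty-augmented objective described in the abstract can be sketched as follows. This is a minimal illustration of the EWC-style quadratic anchor, not the paper's implementation; `theta_star`, `fisher`, and the default `lam` value are assumptions for the example.

```python
# Illustrative EWC-style regularized loss (assumed names, not the paper's code).
# theta:      current model parameters (flat list for simplicity)
# theta_star: parameter snapshot taken after training on the previous configuration
# fisher:     diagonal Fisher importance estimate per parameter
# lam:        regularization strength (hypothetical default)

def ewc_penalty(theta, theta_star, fisher, lam=100.0):
    """Quadratic penalty anchoring important parameters near theta_star."""
    return 0.5 * lam * sum(
        f * (t - ts) ** 2 for t, ts, f in zip(theta, theta_star, fisher)
    )

def total_loss(task_loss, theta, theta_star, fisher, lam=100.0):
    """Standard prediction loss augmented with the consolidation penalty."""
    return task_loss + ewc_penalty(theta, theta_star, fisher, lam)
```

Parameters with large Fisher values are pulled strongly toward their old-task values, while unimportant parameters remain free to adapt to the new network configuration.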
Problem

Research questions and friction points this paper is trying to address.

Predicting wireless channels under network distribution shifts
Addressing catastrophic forgetting in channel prediction models
Reducing performance degradation during cross-configuration handovers
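The degradation figures above are measured in NMSE. As a quick sketch (function names are assumptions, not from the paper), NMSE in dB and the conversion between a dB drop and a relative reduction look like this:

```python
import math

def nmse_db(h_true, h_pred):
    """Normalized mean square error in dB between true and predicted channels."""
    err = sum((a - b) ** 2 for a, b in zip(h_true, h_pred))
    power = sum(a ** 2 for a in h_true)
    return 10.0 * math.log10(err / power)

def db_drop_to_fraction(delta_db):
    """Relative NMSE reduction implied by a drop of delta_db decibels."""
    return 1.0 - 10.0 ** (-delta_db / 10.0)
```

This conversion matches the reported numbers: a 1.8 dB drop corresponds to roughly a 34% relative NMSE reduction, and 1.4 dB to roughly 28%.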
Innovation

Methods, ideas, or system contributions that make the work stand out.

Continual learning framework for channel prediction
Loss regularization to prevent catastrophic forgetting
Synaptic Intelligence reduces memory complexity
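SI's memory advantage comes from accumulating per-parameter importance online rather than storing per-task snapshots. A minimal sketch of that bookkeeping, assuming the standard SI path-integral formulation (class and variable names are illustrative, not the paper's code):

```python
# Illustrative Synaptic Intelligence importance accumulator (assumed names).
# Memory stays O(M): two length-M vectors, regardless of how many tasks follow.
class SIAccumulator:
    def __init__(self, n_params, xi=1e-3):
        self.omega = [0.0] * n_params       # running path-integral contributions
        self.importance = [0.0] * n_params  # consolidated per-parameter importance
        self.xi = xi                        # damping term to avoid division by zero

    def step(self, grads, deltas):
        """After each optimizer update, accumulate -g_i * delta_theta_i."""
        for i, (g, d) in enumerate(zip(grads, deltas)):
            self.omega[i] += -g * d

    def consolidate(self, total_deltas):
        """At a task boundary, fold omega into the importance estimate and reset."""
        for i, td in enumerate(total_deltas):
            self.importance[i] += self.omega[i] / (td ** 2 + self.xi)
            self.omega[i] = 0.0
```

Because only `omega` and `importance` are kept, the footprint does not grow with the number of network configurations seen, unlike unconsolidated EWC's per-task Fisher and snapshot storage.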