🤖 AI Summary
To address the steep growth of CSI feedback overhead with antenna count in FDD massive MIMO systems, this paper proposes a channel prediction-based reference signal allocation (CPRS) mechanism for dynamic DM-RS allocation. CPRS jointly optimizes channel prediction and reference signal resource allocation, eliminating the need for real-time CSI feedback while enabling adaptive transmission compliant with the 3GPP 5G-Advanced standards. Methodologically, time-varying CSI matrices are modeled as spatiotemporal image sequences, and a ViViT/CNN hybrid network is designed to efficiently extract and predict their joint spatial–temporal features. Evaluated via Sionna-based ray-tracing simulations, CPRS achieves up to a 36.60% throughput gain over baseline schemes, significantly improving feedback efficiency and spectral utilization under dynamic channel conditions.
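The paper does not publish its network definition, but the described idea maps naturally onto a per-frame CNN encoder over CSI "images" (real/imaginary parts as channels) followed by a ViViT-style temporal Transformer that predicts the next CSI frame. The PyTorch sketch below is a minimal illustration under those assumptions; all layer sizes, dimensions, and names (e.g. `CSIPredictor`) are hypothetical, not the authors' code.

```python
# Hypothetical sketch of a ViViT/CNN-style CSI predictor (not the authors' code).
# Each CSI matrix (n_rx x n_sub, complex) is treated as a 2-channel image
# (real/imag); a CNN embeds each frame and a Transformer models time.
import torch
import torch.nn as nn

class CSIPredictor(nn.Module):
    def __init__(self, n_rx=4, n_sub=64, d_model=128, n_layers=2, n_heads=4):
        super().__init__()
        # Per-frame spatial encoder: CSI "image" -> d_model embedding.
        self.cnn = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, d_model),
        )
        # ViViT-style temporal encoder over the sequence of frame embeddings.
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, n_layers)
        # Decode the last temporal token back to a 2-channel CSI image.
        self.head = nn.Linear(d_model, 2 * n_rx * n_sub)
        self.out_shape = (2, n_rx, n_sub)

    def forward(self, x):                 # x: (batch, T, 2, n_rx, n_sub)
        b, t = x.shape[:2]
        z = self.cnn(x.flatten(0, 1))     # encode frames: (b*T, d_model)
        z = self.temporal(z.view(b, t, -1))
        return self.head(z[:, -1]).view(b, *self.out_shape)  # next frame

model = CSIPredictor()
hist = torch.randn(8, 10, 2, 4, 64)      # 10 past CSI frames per sample
pred = model(hist)                        # predicted next CSI frame
print(pred.shape)                         # torch.Size([8, 2, 4, 64])
```

The predicted frame can then drive DM-RS density selection before any feedback arrives, which is the scheduling side of CPRS.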
📝 Abstract
Reducing feedback overhead in beyond 5G networks is a critical challenge, as the growing number of antennas in modern massive MIMO systems substantially increases the channel state information (CSI) feedback demand in frequency division duplex (FDD) systems. To address this, extensive research has focused on CSI compression and prediction, with neural network-based approaches gaining momentum and being considered for integration into the 3GPP 5G-Advanced standards. While deep learning has been effectively applied to CSI-limited beamforming and handover optimization, reference signal allocation under such constraints remains surprisingly underexplored. To fill this gap, we introduce the concept of channel prediction-based reference signal allocation (CPRS), which jointly optimizes channel prediction and DM-RS allocation to improve data throughput without requiring CSI feedback. We further propose a standards-compliant ViViT/CNN-based architecture that implements CPRS by treating evolving CSI matrices as sequential image-like data, enabling efficient and adaptive transmission in dynamic environments. Simulation results using ray-tracing channel data generated in NVIDIA Sionna validate the proposed method, showing up to 36.60% throughput improvement over benchmark strategies.
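For readers who want to reproduce the channel-data pipeline, the following is a minimal sketch of generating ray-traced channel impulse responses with Sionna's 0.x RT API and converting them to frequency-domain CSI. The scene, antenna geometry, positions, and OFDM numerology are placeholder assumptions, not the paper's configuration.

```python
# Minimal sketch of ray-traced CSI generation with NVIDIA Sionna (0.x API).
# Scene, positions, and OFDM numerology are illustrative assumptions only.
import sionna
from sionna.rt import load_scene, Transmitter, Receiver, PlanarArray
from sionna.channel import subcarrier_frequencies, cir_to_ofdm_channel

scene = load_scene(sionna.rt.scene.munich)      # built-in example scene
scene.tx_array = PlanarArray(num_rows=1, num_cols=8,
                             vertical_spacing=0.5, horizontal_spacing=0.5,
                             pattern="tr38901", polarization="V")
scene.rx_array = PlanarArray(num_rows=1, num_cols=1,
                             vertical_spacing=0.5, horizontal_spacing=0.5,
                             pattern="iso", polarization="V")
scene.add(Transmitter(name="tx", position=[8.5, 21.0, 27.0]))
scene.add(Receiver(name="rx", position=[45.0, 90.0, 1.5]))

paths = scene.compute_paths(max_depth=3)        # ray tracing
a, tau = paths.cir()                            # path gains and delays

# Frequency-domain CSI on a 64-subcarrier, 30 kHz grid (assumed values).
freqs = subcarrier_frequencies(64, 30e3)
h_freq = cir_to_ofdm_channel(freqs, a, tau)     # CSI snapshot per time step
print(h_freq.shape)
```

Stacking such snapshots over successive receiver positions yields the sequential image-like CSI data that the prediction network consumes.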