🤖 AI Summary
Beam tracking for mobile users in 5G/6G systems is challenging under large-scale codebooks, particularly in complex, time-varying propagation environments with reflections and blockages.
Method: The paper formulates beam selection as a codebook-based online search problem and casts it in a Partially Observable Markov Decision Process (POMDP) framework. The authors propose a meta-reinforcement learning approach that takes belief states as input and integrates a multi-armed bandit mechanism for adaptive policy updates, requiring neither prior trajectory data nor explicit channel knowledge.
Contribution/Results: The method significantly improves robustness and generalization under unknown user mobility and environmental dynamics. In a typical urban microcell scenario, it reduces beam lock-loss probability by two orders of magnitude and cuts beam switching latency by over 90%, outperforming existing beam-tracking schemes across the key performance metrics.
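The summary's multi-armed bandit mechanism is not specified in detail here. As a rough, hypothetical illustration of the general technique (not the paper's algorithm), a classic UCB1 bandit that learns which of several candidate arms — e.g., competing policy-update rules — to trust could look like this:

```python
import math
import random

# Illustrative sketch only: a UCB1 bandit selecting among candidate arms based
# on observed rewards. The arms, rewards, and constants are assumptions for the
# demo, not details taken from the paper.

class UCB1:
    def __init__(self, n_arms):
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms   # running mean reward per arm
        self.t = 0

    def select(self):
        self.t += 1
        for arm, count in enumerate(self.counts):
            if count == 0:             # play every arm once before using bounds
                return arm
        ucb = [v + math.sqrt(2 * math.log(self.t) / c)
               for v, c in zip(self.values, self.counts)]
        return ucb.index(max(ucb))

    def update(self, arm, reward):
        self.counts[arm] += 1
        # incremental running-mean update
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Toy usage: arm 1 has the highest mean reward and should dominate over time.
random.seed(0)
bandit = UCB1(3)
means = [0.2, 0.8, 0.5]                # assumed Bernoulli reward means
for _ in range(2000):
    arm = bandit.select()
    bandit.update(arm, 1.0 if random.random() < means[arm] else 0.0)
best = bandit.counts.index(max(bandit.counts))  # most-played arm
```

UCB1's exploration bonus shrinks as an arm accumulates pulls, so play concentrates on the empirically best arm while still occasionally revisiting the others — the same explore/exploit trade-off an adaptive policy-update mechanism must manage.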
📝 Abstract
Beamforming-capable antenna arrays with many elements enable higher data rates in next-generation 5G and 6G networks. In current practice, analog beamforming uses a codebook of pre-configured beams, each radiating in a specific direction, and a beam management function continuously selects *optimal* beams for moving user equipments (UEs). However, large codebooks and effects caused by reflections or blockages of beams make optimal beam selection challenging. In contrast to previous work and standardization efforts that opt for supervised learning to train classifiers to predict the next best beam based on previously selected beams, we formulate the problem as a partially observable Markov decision process (POMDP) and model the environment as the codebook itself. At each time step, we select a candidate beam conditioned on the belief state of the unobservable optimal beam and previously probed beams. This frames the beam selection problem as an online search procedure that locates the moving optimal beam. In contrast to previous work, our method handles new or unforeseen trajectories and changes in the physical environment, and outperforms previous work by orders of magnitude.
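The abstract's core idea — maintaining a belief state over the unobservable optimal beam and probing candidate beams to track it — can be sketched as a simple Bayesian filter over codebook indices. This is a minimal illustration of the belief-state concept, not the paper's learned POMDP policy; the codebook size, beam gain pattern, and mobility model below are all assumptions for the demo:

```python
import numpy as np

# Sketch: track which of N codebook beams is optimal for a moving UE by
# maintaining a belief (probability distribution) over beam indices, probing
# the most likely candidates, and updating the belief from observed gains.

N = 64                 # assumed codebook size
BEAMWIDTH = 1.5        # assumed width of a beam's gain pattern, in beam indices
MOBILITY_STD = 1.0     # assumed per-step drift of the optimal beam
idx = np.arange(N)

def gain_model(probe, optimal):
    """Normalized gain of beam `probe` when `optimal` is the true best beam."""
    return np.exp(-0.5 * ((probe - optimal) / BEAMWIDTH) ** 2)

def predict(belief):
    """Diffuse the belief to account for unknown UE mobility between probes."""
    kernel = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / MOBILITY_STD) ** 2)
    kernel /= kernel.sum(axis=0, keepdims=True)
    return kernel @ belief

def update(belief, probes, gains, noise=0.15):
    """Bayesian measurement update from the gains observed on probed beams."""
    for p, g in zip(probes, gains):
        belief = belief * np.exp(-0.5 * ((g - gain_model(p, idx)) / noise) ** 2)
    return belief / belief.sum()

# Start from an acquired beam (index 20) and track a drifting optimal beam.
belief = np.exp(-0.5 * ((idx - 20) / 2.0) ** 2)
belief /= belief.sum()
for t in range(30):
    true_beam = 20 + t                     # UE motion drifts the optimal beam
    belief = predict(belief)
    probes = np.argsort(belief)[-3:]       # probe the 3 most likely candidates
    gains = gain_model(probes, true_beam)  # noiseless measurements for clarity
    belief = update(belief, probes, gains)

tracked = int(np.argmax(belief))           # should sit near the final true beam (49)
```

Each step performs the two POMDP ingredients the abstract names: a prediction that spreads probability mass to account for UE motion, and a measurement update conditioned on the beams actually probed — so the search stays localized around the moving optimum instead of sweeping the whole codebook.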