🤖 AI Summary
Cellular networks face challenges in handover management due to user mobility heterogeneity and base station densification, rendering conventional A3-offset- and TTT-based mechanisms inadequate for jointly mitigating radio link failures (RLFs) and ping-pong handovers. This paper proposes two data-driven approaches: (1) high-dimensional Bayesian optimization (HD-BO) for dynamic tuning of critical handover parameters, and (2) deep reinforcement learning (DRL) for direct serving-cell selection. The framework incorporates transfer learning to enable generalization across user speeds and validates its effectiveness in aerial scenarios such as unmanned aerial vehicle (UAV) communications. Evaluated via Sionna-based ray-tracing simulations of a real-world deployment, both methods significantly outperform 3GPP-standard handover procedures in reducing RLF rates and ping-pong handover counts. HD-BO achieves superior sample efficiency, while transfer learning reduces DRL training time by 2.5× without sacrificing handover performance.
📝 Abstract
Mobility management in cellular networks faces increasing complexity due to network densification and heterogeneous user mobility characteristics. Traditional handover (HO) mechanisms, which rely on predefined parameters such as A3-offset and time-to-trigger (TTT), often fail to optimize mobility performance across varying speeds and deployment conditions. Fixed A3-offset and TTT configurations either delay HOs, increasing radio link failures (RLFs), or accelerate them, leading to excessive ping-pong effects. To address these challenges, we propose two data-driven mobility management approaches leveraging high-dimensional Bayesian optimization (HD-BO) and deep reinforcement learning (DRL). HD-BO optimizes HO parameters such as A3-offset and TTT, striking a desired trade-off between ping-pongs and RLFs. DRL provides a non-parameter-based approach, allowing an agent to select serving cells based on real-time network conditions. We validate our approach using a real-world cellular deployment scenario and Sionna ray tracing for site-specific channel propagation modeling. Results show that both HD-BO and DRL outperform the 3GPP set-1 (TTT of 480 ms and A3-offset of 3 dB) and set-5 (TTT of 40 ms and A3-offset of -1 dB) benchmarks. We augment HD-BO with transfer learning so it can generalize across a range of user speeds. Applying the same transfer-learning strategy to the DRL method reduces its training time by a factor of 2.5 while preserving optimal HO performance, showing that it adapts efficiently to the mobility of aerial users such as UAVs. Simulations further reveal that HD-BO remains more sample-efficient than DRL, making it more suitable for scenarios with limited training data.
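To make the A3-offset/TTT trade-off concrete, here is a minimal sketch of the 3GPP A3 entering condition that both benchmark parameter sets configure: a handover is triggered only after the neighbor's RSRP exceeds the serving cell's RSRP by the A3-offset continuously for the full TTT. This is an illustrative simplification (hysteresis and cell-individual offsets are omitted, and the 10 ms sampling interval and all function/variable names are assumptions, not from the paper).

```python
from dataclasses import dataclass

@dataclass
class A3Config:
    a3_offset_db: float  # e.g. 3 dB (3GPP set-1) or -1 dB (set-5)
    ttt_ms: int          # time-to-trigger, e.g. 480 ms or 40 ms

def a3_handover_time(serving_rsrp_dbm, neighbor_rsrp_dbm, cfg, step_ms=10):
    """Return the time (ms) at which an A3-triggered HO fires, or None.

    Inputs are equal-length RSRP traces sampled every `step_ms` ms.
    Simplified A3 entering condition (hysteresis omitted):
        RSRP_neighbor > RSRP_serving + a3_offset
    which must hold continuously for the full TTT.
    """
    held_ms = 0
    for t, (s, n) in enumerate(zip(serving_rsrp_dbm, neighbor_rsrp_dbm)):
        if n > s + cfg.a3_offset_db:
            held_ms += step_ms
            if held_ms >= cfg.ttt_ms:
                return t * step_ms
        else:
            held_ms = 0  # condition broken: TTT timer resets
    return None

# A brief 60 ms fade where the neighbor looks 2 dB stronger:
serving = [-90.0] * 20
neighbor = [-88.0] * 6 + [-95.0] * 14
set5 = A3Config(a3_offset_db=-1.0, ttt_ms=40)   # aggressive: ping-pong-prone
set1 = A3Config(a3_offset_db=3.0, ttt_ms=480)   # conservative: RLF-prone
print(a3_handover_time(serving, neighbor, set5))  # fires during the fade
print(a3_handover_time(serving, neighbor, set1))  # never fires (None)
```

The example shows why no fixed pair works for all mobility profiles: set-5 hands over on a transient fade (a likely ping-pong), while set-1 would ignore even a genuine coverage change for nearly half a second (an RLF risk at high speed). HD-BO searches this parameter space per deployment, whereas DRL bypasses the parameters entirely.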