Data-Driven Cellular Mobility Management via Bayesian Optimization and Reinforcement Learning

📅 2025-05-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Cellular networks face challenges in handover management due to user mobility heterogeneity and base station densification, rendering conventional A3-offset- and TTT-based mechanisms inadequate for jointly mitigating radio link failures (RLFs) and ping-pong handovers. This paper proposes two data-driven approaches: (1) high-dimensional Bayesian optimization (HD-BO) for dynamic tuning of critical handover parameters, and (2) deep reinforcement learning (DRL) for direct serving-cell selection. We introduce the first synergistic HD-BO–DRL framework, incorporating transfer learning to enable cross-speed generalization and, for the first time, validating its effectiveness in aerial scenarios such as unmanned aerial vehicle (UAV) communications. Evaluated via Sionna-based ray-tracing simulations, both methods significantly outperform 3GPP-standard handover procedures in reducing RLF rates and ping-pong handover counts. HD-BO achieves superior sample efficiency, while DRL reduces training time by 2.5× and delivers optimal overall performance.

📝 Abstract
Mobility management in cellular networks faces increasing complexity due to network densification and heterogeneous user mobility characteristics. Traditional handover (HO) mechanisms, which rely on predefined parameters such as A3-offset and time-to-trigger (TTT), often fail to optimize mobility performance across varying speeds and deployment conditions. Fixed A3-offset and TTT configurations either delay HOs, increasing radio link failures (RLFs), or accelerate them, leading to excessive ping-pong effects. To address these challenges, we propose two data-driven mobility management approaches leveraging high-dimensional Bayesian optimization (HD-BO) and deep reinforcement learning (DRL). HD-BO optimizes HO parameters such as A3-offset and TTT, striking a desired trade-off between ping-pongs and RLFs. DRL provides a non-parameter-based approach, allowing an agent to select serving cells based on real-time network conditions. We validate our approach using a real-world cellular deployment scenario, employing Sionna ray tracing for site-specific channel propagation modeling. Results show that both HD-BO and DRL outperform the 3GPP set-1 (TTT of 480 ms and A3-offset of 3 dB) and set-5 (TTT of 40 ms and A3-offset of -1 dB) benchmarks. We augment HD-BO with transfer learning so it can generalize across a range of user speeds. Applying the same transfer-learning strategy to the DRL method reduces its training time by a factor of 2.5 while preserving optimal HO performance, showing that it adapts efficiently to the mobility of aerial users such as UAVs. Simulations further reveal that HD-BO remains more sample-efficient than DRL, making it more suitable for scenarios with limited training data.
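The 3GPP set-1/set-5 baselines referenced above both apply the standard A3 event rule: a handover is triggered when a neighbor cell's RSRP exceeds the serving cell's RSRP by the A3-offset continuously for the TTT duration. A minimal sketch of that rule (function and parameter names are illustrative, not from the paper's code):

```python
def a3_handover_triggered(serving_rsrp, neighbor_rsrp,
                          a3_offset_db, ttt_ms, step_ms=10):
    """serving_rsrp / neighbor_rsrp: per-step RSRP samples in dBm.

    Returns True once the A3 condition (neighbor > serving + offset)
    has held continuously for at least ttt_ms.
    """
    needed = ttt_ms // step_ms  # consecutive samples required to satisfy TTT
    streak = 0
    for s, n in zip(serving_rsrp, neighbor_rsrp):
        if n > s + a3_offset_db:
            streak += 1
            if streak >= needed:
                return True
        else:
            streak = 0  # condition broken: the TTT timer resets
    return False

# A neighbor that stays 2 dB stronger for 200 ms (10 ms sampling):
serving = [-90.0] * 20
neighbor = [-88.0] * 20
print(a3_handover_triggered(serving, neighbor, 3, 480))   # set-1: no HO
print(a3_handover_triggered(serving, neighbor, -1, 40))   # set-5: HO triggered
```

This illustrates the trade-off the abstract describes: the conservative set-1 configuration never fires on a modest 2 dB advantage (risking RLFs), while the aggressive set-5 configuration fires within 40 ms (risking ping-pongs).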
Problem

Research questions and friction points this paper is trying to address.

Optimizing handover parameters to reduce radio link failures and ping-pong effects
Addressing mobility management complexity in dense, heterogeneous cellular networks
Enhancing adaptability to varying user speeds and deployment conditions
Innovation

Methods, ideas, or system contributions that make the work stand out.

High-dimensional Bayesian optimization for HO parameters
Deep reinforcement learning for real-time cell selection
Transfer learning to generalize across user speeds
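The HD-BO contribution can be pictured as a search over (A3-offset, TTT) against a scalarized ping-pong/RLF objective. The sketch below uses plain random search as a stand-in for the Bayesian-optimization loop, with a synthetic cost function in place of the paper's Sionna ray-tracing evaluations; all weights and penalty shapes are invented for illustration:

```python
import random

def mobility_cost(a3_offset_db, ttt_ms, w_pp=1.0, w_rlf=2.0):
    # Synthetic proxy (NOT the paper's objective): small offset/TTT
    # inflate ping-pongs; large offset/TTT delay HOs and inflate RLFs.
    ping_pong = max(0.0, 5.0 - a3_offset_db) + max(0.0, (200 - ttt_ms) / 50)
    rlf = max(0.0, a3_offset_db - 1.0) + max(0.0, (ttt_ms - 160) / 100)
    return w_pp * ping_pong + w_rlf * rlf

random.seed(0)
# Random search over the parameter space; HD-BO would instead fit a
# surrogate model and pick candidates via an acquisition function.
candidates = [(random.uniform(-2.0, 6.0),
               random.choice([40, 80, 160, 320, 480]))
              for _ in range(200)]
best = min(candidates, key=lambda p: mobility_cost(*p))
print("best (A3-offset dB, TTT ms):", best)
```

The point of HD-BO over this naive search is sample efficiency: each "evaluation" in the real system is an expensive ray-tracing simulation, so the surrogate model matters far more than it does for this cheap synthetic cost.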
Mohamed Benzaghta
Universitat Pompeu Fabra, Spain
Sahar Ammar
King Abdullah University of Science and Technology, Saudi Arabia
David López-Pérez
Universitat Politècnica de València, Spain
Basem Shihada
Computer Science, King Abdullah University of Science and Technology
Data Networks · Network Systems · Wireless Systems
Giovanni Geraci
Nokia | Universitat Pompeu Fabra
AI/ML · 6G · Wi-Fi · Wireless Communications