AI Summary
This work addresses the challenge of manually configuring cell (re)selection parameters in mobile networks, a practice that adapts poorly to dynamic environments and consequently limits network performance. To overcome this limitation, the paper proposes CellPilot, a lightweight reinforcement learning-based framework for adaptive parameter optimization. CellPilot is the first to apply reinforcement learning to the tuning of cell (re)selection parameters, enabling the system to learn spatiotemporal network dynamics and generalize across diverse scenarios. Evaluation on real-world datasets demonstrates that CellPilot outperforms conventional heuristic approaches by up to 167%, substantially enhancing both network efficiency and adaptability.
Abstract
The widespread deployment of 5G networks, together with the coexistence of 4G/LTE networks, provides mobile devices with a diverse set of candidate cells to connect to. However, associating mobile devices with cells so as to maximize overall network performance, a.k.a. cell (re)selection, remains a key challenge for mobile operators. Today, cell (re)selection parameters are typically configured manually based on operator experience and are rarely adapted to dynamic network conditions. In this work, we ask: Can an agent automatically learn and adapt cell (re)selection parameters to consistently improve network performance? We present a reinforcement learning (RL)-based framework called CellPilot that adaptively tunes cell (re)selection parameters by learning spatiotemporal patterns of mobile network dynamics. Our study with real-world data demonstrates that even a lightweight RL agent can outperform conventional heuristic reconfigurations by up to 167%, while generalizing effectively across different network scenarios. These results indicate that data-driven approaches can significantly improve cell (re)selection configurations and enhance mobile network performance.
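To make the idea of RL-tuned (re)selection parameters concrete, the sketch below shows a minimal tabular Q-learning loop that picks a reselection hysteresis offset and learns from an observed performance reward. This is purely illustrative and is not CellPilot's actual algorithm: the candidate offsets, the single-state formulation, and the `simulated_reward` function are all assumptions standing in for real network measurements.

```python
import random

# Candidate hysteresis offsets in dB (assumed values for illustration).
ACTIONS = [0, 2, 4, 6]
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def simulated_reward(hysteresis_db):
    """Hypothetical stand-in for measured network performance.

    In a real system this would come from observed KPIs (e.g. throughput);
    here the reward simply peaks at 4 dB with a little noise.
    """
    return -abs(hysteresis_db - 4) + random.gauss(0, 0.1)

# Single-state Q-table: one value per candidate offset.
q = {a: 0.0 for a in ACTIONS}

random.seed(0)
for _ in range(2000):
    # Epsilon-greedy action selection.
    if random.random() < EPS:
        a = random.choice(ACTIONS)
    else:
        a = max(q, key=q.get)
    r = simulated_reward(a)
    # Q-learning update; with a single state, the bootstrap target
    # is the max over the same table.
    q[a] += ALPHA * (r + GAMMA * max(q.values()) - q[a])

best = max(q, key=q.get)
```

After enough iterations, `best` settles on the offset with the highest simulated reward. A realistic agent would additionally condition on spatiotemporal state (location, time of day, load) rather than using a single Q-table row, which is the kind of pattern learning the paper attributes to CellPilot.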