LLM-Guided Open RAN: Empowering Hierarchical RAN Intelligent Control

📅 2025-04-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address insufficient coordination, semantic fragmentation, and policy misalignment between the non-real-time (non-RT) and near-real-time (near-RT) RAN Intelligent Controllers (RICs) in O-RAN, this paper proposes LLM-hRIC, a hierarchical RIC framework. It integrates a large language model (LLM) at the non-RT layer for environment-aware strategic reasoning and couples it with a deep reinforcement learning (DRL) module at the near-RT layer for low-latency resource scheduling. The result is a semantically aligned, dual-timescale architecture that unifies LLM-driven high-level intent understanding with DRL-enabled rapid execution, breaking from conventional isolated RIC decision-making. Evaluated on an integrated access and backhaul (IAB) network simulator, LLM-hRIC achieves a 23% reduction in average latency and an 18% gain in throughput over baseline methods, while significantly improving spectral efficiency and load balancing. The results support both the effectiveness and scalability of hierarchical intelligent control in O-RAN.

📝 Abstract
Recent advancements in large language models (LLMs) have generated significant interest in deploying LLM-empowered algorithms in wireless communication networks. Meanwhile, open radio access network (O-RAN) techniques offer unprecedented flexibility, with the non-real-time (non-RT) RAN intelligent controller (RIC) and the near-real-time (near-RT) RIC enabling intelligent resource management across different time scales. In this paper, we propose the LLM-empowered hierarchical RIC (LLM-hRIC) framework to improve collaboration between RICs. This framework integrates LLMs with reinforcement learning (RL) for efficient network resource management. In this framework, LLM-empowered non-RT RICs provide strategic guidance and high-level policies based on environmental context. Concurrently, RL-empowered near-RT RICs perform low-latency tasks based on this strategic guidance and local near-RT observations. We evaluate the LLM-hRIC framework in an integrated access and backhaul (IAB) network setting. Simulation results demonstrate that the proposed framework achieves superior performance. Finally, we discuss key future challenges in applying LLMs to O-RAN.
Problem

Research questions and friction points this paper is trying to address.

Enhancing collaboration between RAN intelligent controllers (RICs)
Integrating LLMs with RL for network resource management
Improving performance in integrated access and backhaul networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-hRIC integrates LLMs with reinforcement learning
LLMs guide non-RT RICs for strategic policies
RL empowers near-RT RICs for low-latency tasks
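The hierarchical split above can be sketched as a toy control loop. Everything here is an illustrative assumption rather than the paper's implementation: the class names, the rule-based stub standing in for the LLM call, the two-action resource model, and the toy reward.

```python
import random

# Toy sketch of the dual-timescale LLM-hRIC loop. All names (NonRTRIC,
# NearRTRIC, the guidance strings, the reward) are illustrative assumptions,
# not the paper's actual interfaces.

class NonRTRIC:
    """Non-RT RIC: in the paper an LLM maps slow-timescale context to
    strategic guidance; a rule-based stub stands in for the LLM call here."""
    def guidance(self, context):
        return ("prioritize_backhaul" if context["backhaul_load"] > 0.7
                else "prioritize_access")

class NearRTRIC:
    """Near-RT RIC: a tabular Q-learning agent picks fast-timescale
    resource actions, conditioned on the non-RT guidance."""
    def __init__(self, actions, lr=0.5, gamma=0.9):
        self.q = {}                 # (guidance, action) -> estimated value
        self.actions = actions
        self.lr, self.gamma = lr, gamma

    def act(self, guidance, eps=0.3):
        if random.random() < eps:   # occasional exploration
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q.get((guidance, a), 0.0))

    def learn(self, guidance, action, reward):
        old = self.q.get((guidance, action), 0.0)
        best = max(self.q.get((guidance, a), 0.0) for a in self.actions)
        self.q[(guidance, action)] = old + self.lr * (reward + self.gamma * best - old)

def toy_reward(guidance, action):
    # Reward allocating the slice that the high-level guidance asks for.
    return 1.0 if action == guidance.replace("prioritize_", "") + "_slice" else 0.1

random.seed(0)
non_rt = NonRTRIC()
near_rt = NearRTRIC(actions=["access_slice", "backhaul_slice"])
for step in range(400):
    context = {"backhaul_load": 0.9 if step % 2 else 0.3}  # slow-varying context
    g = non_rt.guidance(context)           # non-RT: strategic guidance
    a = near_rt.act(g)                     # near-RT: low-latency action
    near_rt.learn(g, a, toy_reward(g, a))

# After training, the greedy near-RT policy follows the high-level guidance.
print(near_rt.act("prioritize_backhaul", eps=0.0))
```

The point of the sketch is the division of labor: the non-RT component reasons over slow-changing context once per long interval, while the near-RT agent reacts every step and only conditions on the resulting guidance string.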
Lingyan Bao
School of Electrical and Electronic Engineering, Yonsei University, Seoul 03722, South Korea
Sinwoong Yun
ICT Strategy Research Laboratory, Electronics and Telecommunications Research Institute (ETRI), Daejeon 34129, Republic of Korea
Jemin Lee
Associate Professor, Yonsei University
Wireless Communications · Wireless Security · IoT · 5G
Tony Q.S. Quek
Associate Provost (AI & Digital Innovation) and Chair Professor, SUTD
Wireless communications · Networking · Open RAN · AI-RAN · NTN