🤖 AI Summary
To address the susceptibility of deep reinforcement learning (DRL)-driven xApps in O-RAN's near-real-time RIC to local optima and insufficient robustness, this paper proposes a federated neuroevolution (NE) enhancement framework. The framework deploys lightweight, non-intrusive NE modules in parallel—without disrupting RAN operations—to optimize collaboratively with existing DRL xApps: NE performs global exploration and population-level policy evolution, while DRL ensures online control stability. The paper introduces a distributed NE architecture integrated with federated learning, departing from conventional single-agent DRL training paradigms. Evaluation on the Open AI Cellular (OAIC) platform demonstrates that the proposed method significantly improves xApp convergence stability and interference resilience, enhancing robustness by over 32% while incurring less than 8% additional computational overhead.
📝 Abstract
The open radio access network (O-RAN) architecture introduces RAN intelligent controllers (RICs) to facilitate the management and optimization of the disaggregated RAN. Reinforcement learning (RL) and its advanced form, deep RL (DRL), are increasingly employed to design intelligent controllers, or xApps, deployed in the near-real-time (near-RT) RIC. These models often become trapped in local optima, which raises concerns about their reliability for RAN intelligent control. We therefore introduce Federated O-RAN enabled Neuroevolution (NE)-enhanced DRL (F-ONRL), which deploys an NE-based optimizer xApp in parallel with the RAN controller xApps. This NE-DRL xApp framework enables effective exploration and exploitation in the near-RT RIC without disrupting RAN operations. We implement the NE xApp along with a DRL xApp, deploy them on the Open AI Cellular (OAIC) platform, and present numerical results that demonstrate the improved robustness of the xApps while effectively balancing the additional computational load.
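The core idea described above—population-level neuroevolution exploring globally while results are periodically aggregated across sites in a federated step—can be illustrated with a minimal sketch. This is not the paper's implementation: the fitness function, population sizes, mutation scale, aggregation schedule, and the two "sites" standing in for distributed near-RT RICs are all toy assumptions chosen so the example runs self-contained; a real xApp would obtain fitness from RAN rollouts (e.g., measured throughput).

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # toy policy-parameter dimension

def fitness(w, target):
    # Toy stand-in for an xApp's episodic reward; a real system would
    # roll the policy out in the RAN environment and measure performance.
    return -float(np.sum((w - target) ** 2))

def ne_generation(pop, target, elite_frac=0.25, sigma=0.1):
    """One neuroevolution generation: rank by fitness, keep elites,
    refill the population with Gaussian-mutated copies of elites."""
    ranked = sorted(pop, key=lambda w: fitness(w, target), reverse=True)
    n_elite = max(1, int(len(pop) * elite_frac))
    elites = ranked[:n_elite]
    children = [elites[rng.integers(n_elite)]
                + sigma * rng.standard_normal(DIM)
                for _ in range(len(pop) - n_elite)]
    return elites + children

# Two hypothetical sites with slightly different local optima,
# mimicking non-IID radio conditions across distributed RICs.
targets = [np.linspace(-1, 1, DIM), np.linspace(-1, 1, DIM) + 0.1]
pops = [[rng.standard_normal(DIM) for _ in range(16)] for _ in targets]
init_best = [max(fitness(w, t) for w in p) for p, t in zip(pops, targets)]

for gen in range(40):
    pops = [ne_generation(p, t) for p, t in zip(pops, targets)]
    if gen % 10 == 9:
        # Periodic federated step: average each site's current best policy
        # and seed it back into every local population.
        global_best = np.mean([max(p, key=lambda w: fitness(w, t))
                               for p, t in zip(pops, targets)], axis=0)
        pops = [[global_best.copy()] + p[:-1] for p in pops]

final_best = [max(fitness(w, t) for w in p) for p, t in zip(pops, targets)]
```

The evolutionary loop never computes policy gradients, which is what lets it run alongside a gradient-based DRL xApp without interfering with its online updates; the federated averaging step is the sketch's stand-in for the cross-site aggregation the abstract describes.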