On AI Verification in Open RAN

📅 2025-10-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: AI-driven automation in Open RAN faces reliability challenges due to model opacity, particularly in critical functions such as RAN slicing and scheduling; conventional explainable AI (XAI) techniques alone cannot ensure trustworthy operation.
Method: We propose a lightweight, integrable runtime verification framework that employs decision trees (DTs) as interpretable, verifiable surrogates embedded within the Open RAN architecture, performing near-real-time consistency checks on the slicing and scheduling decisions of deep reinforcement learning (DRL) agents.
Contribution/Results: Compared with computationally expensive state-of-the-art verifiers, our method substantially reduces computational and deployment cost while preserving scalability and multi-vendor interoperability. Experimental evaluation in heterogeneous environments confirms its feasibility, providing low-overhead, near-real-time assurance of AI behavior and establishing a reusable, practical pathway toward trustworthy AI deployment in Open RAN.
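The summary describes decision trees distilled as interpretable surrogates of a DRL agent. The paper's own implementation details are not given here, so the following is only an illustrative pure-Python sketch of the distillation idea: a depth-1 "decision stump" is fitted to logged (state feature, action) pairs from a hypothetical slicing agent by scanning candidate thresholds. The feature name (`prb_load`) and actions (`scale_down`, `hold`) are invented for illustration.

```python
def fit_stump(samples):
    """Fit a depth-1 decision stump to logged agent decisions.

    samples: list of (feature_value, action) pairs recorded from a
    (hypothetical) DRL slicing agent. Returns (threshold, action_below,
    action_at_or_above) minimising misclassifications over candidate
    thresholds placed between consecutive feature values.
    """
    best = None
    values = sorted({v for v, _ in samples})
    for lo, hi in zip(values, values[1:]):
        t = (lo + hi) / 2  # candidate threshold: midpoint between values
        below = [a for v, a in samples if v < t]
        above = [a for v, a in samples if v >= t]
        # majority action on each side of the split
        pred_below = max(set(below), key=below.count)
        pred_above = max(set(above), key=above.count)
        errors = (sum(a != pred_below for a in below)
                  + sum(a != pred_above for a in above))
        if best is None or errors < best[0]:
            best = (errors, t, pred_below, pred_above)
    return best[1:]

# Hypothetical logged (prb_load, action) pairs from the agent.
log = [(0.1, "scale_down"), (0.2, "scale_down"), (0.6, "hold"), (0.8, "hold")]
threshold, low_act, high_act = fit_stump(log)
print(low_act, high_act)  # scale_down hold
```

A real surrogate would be a deeper tree over many KPI features, but the principle is the same: the learned thresholds are human-readable and can be checked against domain constraints, which is what makes the surrogate verifiable.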

📝 Abstract
Open RAN introduces a flexible, cloud-based architecture for the Radio Access Network (RAN), enabling Artificial Intelligence (AI)/Machine Learning (ML)-driven automation across heterogeneous, multi-vendor deployments. While EXplainable Artificial Intelligence (XAI) helps mitigate the opacity of AI models, explainability alone does not guarantee reliable network operations. In this article, we propose a lightweight verification approach based on interpretable models to validate the behavior of Deep Reinforcement Learning (DRL) agents for RAN slicing and scheduling in Open RAN. Specifically, we use Decision Tree (DT)-based verifiers to perform near-real-time consistency checks at runtime, which would be otherwise unfeasible with computationally expensive state-of-the-art verifiers. We analyze the landscape of XAI and AI verification, propose a scalable architectural integration, and demonstrate feasibility with a DT-based slice-verifier. We also outline future challenges to ensure trustworthy AI adoption in Open RAN.
Problem

Research questions and friction points this paper is trying to address.

Verifying AI behavior in Open RAN networks
Ensuring reliable operations beyond explainability alone
Validating DRL agents for slicing and scheduling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight verification using interpretable models for DRL agents
Decision Tree-based verifiers for near-real-time consistency checks
Scalable architectural integration to ensure trustworthy AI adoption
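The near-real-time consistency check described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature names (`prb_load`, `latency_ms`), thresholds, and actions (`scale_up`, `scale_down`, `hold`) are all hypothetical, and the hand-written rules stand in for a distilled decision-tree surrogate.

```python
def dt_surrogate(state):
    """Interpretable DT surrogate: maps a slice state to the expected action.

    state: dict with hypothetical features 'prb_load' (0..1 utilisation)
    and 'latency_ms' (measured slice latency). Returns one of the
    hypothetical actions 'scale_up', 'scale_down', 'hold'.
    """
    if state["latency_ms"] > 10.0:   # latency budget violated
        return "scale_up"            # give the slice more resources
    if state["prb_load"] < 0.3:      # slice is under-utilised
        return "scale_down"          # reclaim resources
    return "hold"

def verify(state, drl_action):
    """Runtime consistency check: flag a DRL decision that disagrees
    with the interpretable surrogate, instead of blocking it outright."""
    expected = dt_surrogate(state)
    return {"consistent": drl_action == expected, "expected": expected}

# Example: the agent proposes 'hold' while the latency budget is exceeded.
result = verify({"prb_load": 0.5, "latency_ms": 15.0}, "hold")
print(result)  # {'consistent': False, 'expected': 'scale_up'}
```

Because the surrogate is a shallow tree of threshold comparisons, each check costs a handful of branches, which is what makes verification feasible at near-real-time timescales where exhaustive neural-network verifiers would be far too slow.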