🤖 AI Summary
To address the challenges of integrating and monetizing Generative Artificial Intelligence (GAI) in open 6G networks, this paper proposes the first telecom-grade GAI marketplace platform tailored for Open RAN. The API-centric platform enables Mobile Network Operators (MNOs) to deploy, orchestrate, and monetize diverse GAI services natively within their networks, achieving tight GAI–6G integration, cross-domain interoperability, and customer-driven commercialization. Key technical contributions include: (i) a network-intrinsic AI deployment framework validated on an Open RAN testbed; (ii) localized large language model (LLM) inference optimization; and (iii) an end-to-end low-latency token scheduling mechanism. Experimental results show that on-device LLM deployment significantly reduces generation latency compared to cloud-based general-purpose models. The platform further demonstrates high compatibility, scalability, and the feasibility of novel GAI-enabled revenue models, establishing a practical, deployable paradigm for AI-native 6G intelligent networks.
📝 Abstract
Generative artificial intelligence (GAI) has emerged as a pivotal technology for content generation, reasoning, and decision-making, making it a promising enabler for 6G networks, which are characterized by openness, connected intelligence, and service democratization. This article explores strategies for integrating and monetizing GAI within future open 6G networks, mainly from the perspective of mobile network operators (MNOs). We propose a novel API-centric telecom GAI marketplace platform, designed to serve as a central hub for deploying, managing, and monetizing diverse GAI services directly within the network. This platform underpins a flexible and interoperable ecosystem, enhances service delivery, and facilitates seamless integration of GAI capabilities across various network segments, thereby enabling new revenue streams through customer-centric generative services. Results from an experimental evaluation on an end-to-end Open RAN testbed show the latency benefits of this platform for local large language model (LLM) deployment, comparing token timings across various generation lengths against cloud-based general-purpose LLMs. Lastly, the article discusses key considerations for implementing the GAI marketplace within 6G networks, including monetization strategies and regulatory, management, and service platform aspects.
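The evaluation described above compares per-token timing of a locally deployed LLM against cloud-based general-purpose models. A minimal sketch of how such a measurement harness could work is shown below; it records time-to-first-token and mean inter-token latency over any streaming token source. All names here (`latency_metrics`, the stub token stream) are illustrative assumptions, not the paper's actual tooling.

```python
import time
from typing import Iterable, List, Tuple


def latency_metrics(tokens: Iterable[str]) -> Tuple[float, float, int]:
    """Consume a streaming token source and return latency statistics.

    Returns (time_to_first_token, mean_inter_token_latency, token_count),
    with times in seconds. Works with any iterable that yields tokens as
    they are generated, e.g. a local-inference stream or a cloud API stream.
    """
    start = time.perf_counter()
    arrivals: List[float] = []
    for _ in tokens:  # iterate the stream, stamping each token's arrival
        arrivals.append(time.perf_counter())

    if not arrivals:
        return 0.0, 0.0, 0

    ttft = arrivals[0] - start  # time to first token
    gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]
    mean_gap = sum(gaps) / len(gaps) if gaps else 0.0
    return ttft, mean_gap, len(arrivals)


if __name__ == "__main__":
    # Stub stream standing in for a real LLM; a testbed comparison would
    # run this once against the local model and once against a cloud API.
    def fake_stream():
        for tok in ["6G", " networks", " are", " open", "."]:
            time.sleep(0.01)  # simulated per-token generation delay
            yield tok

    ttft, gap, n = latency_metrics(fake_stream())
    print(f"tokens={n} ttft={ttft*1e3:.1f}ms mean_gap={gap*1e3:.1f}ms")
```

Running the same harness against streams of increasing generation length would reproduce the kind of token-timing comparison the abstract reports, with cloud streams typically adding network round-trip and queuing delay on top of raw generation time.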