🤖 AI Summary
Edge AI applications demand ultra-low end-to-end latency (<20 ms), high throughput, and real-time processing—requirements unmet by current 5G networks. Empirical measurements across central European 5G deployments reveal end-to-end latencies of 61–110 ms, exceeding the AI-critical threshold by approximately 270%, exposing fundamental bottlenecks in semantic awareness, compute-network co-design, and dynamic resource orchestration.
Method: This work quantifies, for the first time, the generational latency gap between 5G and edge AI through large-scale field measurements. It proposes an AI-native 6G design paradigm grounded in this empirical evidence and establishes a three-dimensional co-design architecture that integrates networking, computing, and semantics.
Contribution/Results: The study delivers a comprehensive, implementation-oriented technology roadmap spanning air interface, core network, and edge-intelligent coordination. It provides both theoretical foundations and practical engineering pathways for evolving 6G infrastructure toward AI-native services, bridging the latency gap while enabling semantic-aware, real-time distributed intelligence.
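The "~270%" gap figure follows directly from comparing measured latencies against the 20 ms threshold. The sketch below is illustrative only, not from the paper: it computes the percentage excess for the two endpoints of the reported 61–110 ms range (205% and 450%, which bracket the paper's ~270% average figure). All names are hypothetical.

```python
# Illustrative check of the latency-gap arithmetic (not from the paper).
THRESHOLD_MS = 20.0                 # latency-critical AI target (<20 ms)
MEASURED_RANGE_MS = (61.0, 110.0)   # observed 5G end-to-end latency range

def excess_percent(latency_ms: float, threshold_ms: float = THRESHOLD_MS) -> float:
    """Percentage by which a measured latency exceeds the threshold."""
    return (latency_ms - threshold_ms) / threshold_ms * 100.0

low, high = (excess_percent(x) for x in MEASURED_RANGE_MS)
print(f"excess over threshold: {low:.0f}%-{high:.0f}%")  # 205%-450%
```

The paper's ~270% figure presumably corresponds to a mean measured latency within this range; the endpoints above simply bound it.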
📝 Abstract
The convergence of Artificial Intelligence (AI) and the Internet of Things has accelerated the development of distributed, network-sensitive applications, necessitating ultra-low latency, high throughput, and real-time processing capabilities. While 5G networks represent a significant technological milestone, their ability to support AI-driven edge applications remains constrained by performance gaps observed in real-world deployments. This paper addresses these limitations and highlights critical advancements needed to realize a robust and scalable 6G ecosystem optimized for AI applications. Furthermore, we conduct an empirical evaluation of 5G network infrastructure in central Europe, with latency measurements ranging from 61 ms to 110 ms across geographically proximate areas. These values exceed the requirements of latency-critical AI applications by approximately 270%, revealing significant shortcomings in current deployments. Building on these findings, we propose a set of recommendations to bridge the gap between existing 5G performance and the requirements of next-generation AI applications.