🤖 AI Summary
To address the challenge of modeling and predicting end-to-end latency distributions over time-varying wireless links in 5G ultra-reliable low-latency communication (URLLC) scenarios, this paper proposes, for the first time, a multi-step latency prediction framework that is aware of the full probability distribution. Methodologically, it integrates Transformer or LSTM architectures with a mixture density network (MDN) based on Gaussian mixture models (GMMs) to dynamically capture non-stationary latency distributions. Empirical validation is conducted on the OpenAirInterface 5G software-defined radio (SDR) platform, including precise time-synchronized latency data acquisition and preprocessing. Experimental results demonstrate that the proposed model outperforms both standard LSTM and feed-forward baselines in terms of negative log-likelihood and mean absolute error. Notably, it significantly improves latency-guarantee accuracy at the 99.9% reliability target, enabling adaptive scheduling, intelligent resource allocation, and QoS-driven evolution toward 6G.
📄 Abstract
With the emergence of new application areas such as cyber-physical systems and human-in-the-loop applications, ensuring a specific level of end-to-end network latency with high reliability (e.g., 99.9%) is becoming increasingly critical. To align wireless links with these reliability requirements, it is essential to analyze and control network latency in terms of its full probability distribution. However, in a wireless link, the distribution may vary over time, making this task particularly challenging. We propose predicting the latency distribution using state-of-the-art data-driven techniques that leverage historical network information. Our approach tokenizes network state information and processes it with temporal deep-learning architectures, namely LSTM and Transformer models, to capture both short- and long-term delay dependencies. These models output the parameters of a chosen parametric density via a mixture density network with Gaussian mixtures, yielding multi-step probabilistic forecasts of future delays. To validate the proposed approach, we implemented and tested these methods on a time-synchronized, SDR-based OpenAirInterface 5G testbed to collect and preprocess network-delay data. Our experiments show that the Transformer model achieves lower negative log-likelihood and mean absolute error than both LSTM and feed-forward baselines in challenging scenarios, while also providing insights into model complexity and training/inference overhead. This framework enables more informed decision-making for adaptive scheduling and resource allocation, paving the way toward enhanced QoS in evolving 5G and 6G networks.
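The abstract describes the MDN output stage only at a high level. As a minimal NumPy sketch (not the authors' implementation), the two quantities it revolves around can be written down directly: the negative log-likelihood of a 1-D Gaussian mixture, which is what an MDN head is trained to minimize, and a Monte-Carlo estimate of a high quantile of that mixture, e.g. the 99.9th-percentile delay used as a reliability-target guarantee. The function names and the single-step, one-dimensional setting are illustrative assumptions; in the paper the mixture parameters would be the per-step outputs of the LSTM/Transformer MDN head.

```python
import numpy as np

def gmm_nll(y, weights, means, stds):
    """Negative log-likelihood of delay observations y under a 1-D
    Gaussian mixture with K components (the MDN training loss)."""
    y = np.asarray(y, dtype=float).reshape(-1, 1)              # (N, 1)
    comp = np.exp(-0.5 * ((y - means) / stds) ** 2) / (
        stds * np.sqrt(2.0 * np.pi))                           # (N, K) densities
    return -np.mean(np.log(comp @ weights))                    # scalar NLL

def gmm_quantile(q, weights, means, stds, n=1_000_000, seed=0):
    """Monte-Carlo estimate of the q-quantile of the mixture,
    e.g. q=0.999 for a 99.9% latency-reliability target."""
    rng = np.random.default_rng(seed)
    ks = rng.choice(len(weights), size=n, p=weights)           # sample components
    samples = rng.normal(means[ks], stds[ks])                  # sample delays
    return np.quantile(samples, q)
```

As a sanity check: for a single standard-normal component, `gmm_nll` at y=0 equals 0.5·log(2π) ≈ 0.919, and the 0.999 quantile is ≈ 3.09.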