To Trust or Not to Trust: On Calibration in ML-based Resource Allocation for Wireless Networks

📅 2025-07-23
📈 Citations: 0
Influential citations: 0
📄 PDF
🤖 AI Summary
This paper addresses the calibration of outage predictions for machine learning driven resource allocation in next-generation wireless networks, focusing on a single-user, multi-resource scenario. We establish key theoretical properties of the system's outage probability (OP) under perfect calibration and prove that post-hoc calibration cannot improve the minimum achievable OP, since it introduces no new information about future channel states. We further establish a monotonicity condition on the accuracy-confidence function under which a predictor necessarily improves the OP. Methodologically, we apply two post-hoc calibration techniques, Platt scaling and isotonic regression, and train the predictor with an outage loss function tailored to the system's reliability requirements. Simulations over temporally correlated Rayleigh fading channels, modeled with Clarke's two-dimensional model to account for receiver mobility, show that as the number of resources grows, the OP of a perfectly calibrated predictor approaches the expected model output conditioned on lying below the classification threshold. This lets designers choose the threshold to meet a target OP, improving the predictability and reliability of the system.
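
For readers unfamiliar with the two post-hoc maps named above, the sketch below shows how a held-out set of raw predictor scores could be recalibrated with Platt scaling (a one-dimensional logistic regression) and with isotonic regression using scikit-learn. The synthetic scores, labels, and variable names are illustrative assumptions, not the paper's data or code.

```python
# A minimal sketch, assuming a generic outage predictor: its raw confidence
# scores on a held-out set are recalibrated with Platt scaling (a 1-D logistic
# regression) and with isotonic regression via scikit-learn.
import numpy as np
from sklearn.isotonic import IsotonicRegression
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Held-out calibration set: raw scores in [0, 1] and binary outage labels
# (1 = outage). The label model is deliberately miscalibrated for illustration.
raw_scores = rng.uniform(size=2000)
labels = (rng.uniform(size=2000) < raw_scores**2).astype(int)

# Platt scaling: logistic regression on the raw score.
platt = LogisticRegression().fit(raw_scores.reshape(-1, 1), labels)

# Isotonic regression: monotone non-decreasing map from score to outage rate.
iso = IsotonicRegression(out_of_bounds="clip").fit(raw_scores, labels)

# Calibrated confidences for new raw scores.
new_scores = np.array([0.1, 0.5, 0.9])
print("Platt:   ", platt.predict_proba(new_scores.reshape(-1, 1))[:, 1])
print("Isotonic:", iso.predict(new_scores))
```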

📝 Abstract
In next-generation communications and networks, machine learning (ML) models are expected to deliver not only accurate predictions but also well-calibrated confidence scores that reflect the true likelihood of correct decisions. This paper studies the calibration performance of an ML-based outage predictor within a single-user, multi-resource allocation framework. We first establish key theoretical properties of this system's outage probability (OP) under perfect calibration. Importantly, we show that as the number of resources grows, the OP of a perfectly calibrated predictor approaches the expected output conditioned on it being below the classification threshold. In contrast, when only one resource is available, the system's OP equals the model's overall expected output. We then derive the OP conditions for a perfectly calibrated predictor. These findings guide the choice of the classification threshold to achieve a desired OP, helping system designers meet specific reliability requirements. We also demonstrate that post-processing calibration cannot improve the system's minimum achievable OP, as it does not introduce new information about future channel states. Additionally, we show that well-calibrated models are part of a broader class of predictors that necessarily improve OP. In particular, we establish a monotonicity condition that the accuracy-confidence function must satisfy for such improvement to occur. To demonstrate these theoretical properties, we conduct a rigorous simulation-based analysis using post-processing calibration techniques: Platt scaling and isotonic regression. As part of this framework, the predictor is trained using an outage loss function specifically designed for this system. Furthermore, this analysis is performed on Rayleigh fading channels with temporal correlation captured by Clarke's 2D model, which accounts for receiver mobility.
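
Read symbolically, and with notation assumed here for illustration rather than taken from the paper, the abstract's two headline properties of a perfectly calibrated predictor can be written as follows.

```latex
% Assumed notation: \hat{c} is the (perfectly calibrated) predictor output,
% \gamma the classification threshold, K the number of resources, and
% P_{out} the system outage probability. This only restates the abstract's
% claims; the paper's exact conditions and proofs are not reproduced here.
\[
  K = 1:\qquad P_{\mathrm{out}} \;=\; \mathbb{E}\big[\hat{c}\big],
\]
\[
  K \to \infty:\qquad P_{\mathrm{out}} \;\longrightarrow\; \mathbb{E}\big[\hat{c} \,\big|\, \hat{c} < \gamma\big].
\]
```

The second relation is what guides the choice of the classification threshold: once the distribution of the calibrated output is known, the threshold can be set so the conditional expectation matches the desired OP.
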
Problem

Research questions and friction points this paper is trying to address.

Study calibration of ML-based outage predictors in wireless networks (a calibration-check sketch follows this list)
Analyze outage probability under perfect calibration conditions
Evaluate impact of calibration on resource allocation reliability
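
As referenced in the first item above, a minimal calibration check for an outage predictor can be written as a reliability-bin computation of expected calibration error (ECE). The data, bin count, and function name below are illustrative assumptions, not the paper's evaluation code.

```python
# Expected calibration error over equal-width reliability bins: the weighted
# average gap between empirical outage rate and mean confidence per bin.
import numpy as np

def expected_calibration_error(conf, outage, n_bins=10):
    """Weighted average of |empirical outage rate - mean confidence| per bin."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n, ece = len(conf), 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf >= lo) & (conf < hi)
        if in_bin.any():
            gap = abs(outage[in_bin].mean() - conf[in_bin].mean())
            ece += in_bin.sum() / n * gap
    return ece

rng = np.random.default_rng(1)
conf = rng.uniform(size=5000)                           # predicted outage probabilities
outage = (rng.uniform(size=5000) < conf).astype(float)  # labels consistent with conf
print(f"ECE (near-calibrated predictor): {expected_calibration_error(conf, outage):.3f}")
```
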
Innovation

Methods, ideas, or system contributions that make the work stand out.

ML-based outage predictor for resource allocation
Post-processing calibration techniques applied
Outage loss function for predictor training
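
The paper's outage loss is specific to its system model and is not reproduced here. As a rough stand-in, the sketch below shows a cost-weighted binary cross-entropy in which missed outages are up-weighted by the inverse of a target outage probability, one common way to encode a strict reliability requirement. The weighting rule and names are assumptions for illustration only.

```python
# Stand-in for an outage-aware training loss: cost-weighted binary
# cross-entropy where missed outages dominate when the target OP is small.
import numpy as np

def weighted_outage_bce(pred, label, target_op=1e-2, eps=1e-12):
    """
    pred  : predicted outage probability in (0, 1)
    label : 1.0 if an outage actually occurred, else 0.0
    """
    pred = np.clip(pred, eps, 1.0 - eps)
    return -((1.0 / target_op) * label * np.log(pred)
             + (1.0 - label) * np.log(1.0 - pred))

print(weighted_outage_bce(np.array([0.05, 0.9]), np.array([1.0, 0.0])))
```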