🤖 AI Summary
This work addresses the lack of instance-level uncertainty modeling in machine-learning predictions for online algorithm design. Methodologically, it is the first to systematically integrate probabilistic calibration (e.g., Platt scaling and isotonic regression) as a foundation for uncertainty quantification in classical online problems, including ski rental and online job scheduling, and it introduces a calibration-driven competitive-ratio analysis framework whose theoretical guarantees depend on prediction confidence. Theoretically, it establishes a quantitative relationship between calibration quality and competitive ratio, showing advantages over conventional uncertainty-estimation approaches, particularly in high-variance prediction regimes. Empirically, the proposed algorithms significantly outperform baselines on real-world job-scheduling datasets and achieve optimal prediction-dependent performance in the ski-rental problem. Crucially, the theoretical guarantees align closely with the empirical results, demonstrating both rigor and practical efficacy.
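The summary names Platt scaling and isotonic regression as the calibration methods. As a hedged illustration of the latter, here is a minimal pool-adjacent-violators (PAV) sketch in plain Python; the function name and interface are ours for illustration, not from the paper.

```python
# Minimal isotonic-regression calibration via pool-adjacent-violators (PAV).
# Illustrative sketch only: names and interface are ours, not the paper's.

def pav_calibrate(labels_sorted_by_score):
    """Map 0/1 outcomes (sorted by raw model score) to calibrated
    probabilities that are monotone non-decreasing in the score."""
    blocks = []  # each block is [mean label, count]
    for y in labels_sorted_by_score:
        blocks.append([float(y), 1])
        # Merge adjacent blocks while monotonicity is violated.
        while len(blocks) > 1 and blocks[-2][0] >= blocks[-1][0]:
            m2, c2 = blocks.pop()
            m1, c1 = blocks.pop()
            c = c1 + c2
            blocks.append([(m1 * c1 + m2 * c2) / c, c])
    # Expand blocks back to one calibrated probability per example.
    out = []
    for mean, count in blocks:
        out.extend([mean] * count)
    return out

# Example: outcomes sorted by increasing raw score.
print(pav_calibrate([0, 0, 1, 0, 1, 1]))  # [0.0, 0.0, 0.5, 0.5, 1.0, 1.0]
```

The fitted values are simply block averages of the observed outcomes, which is what makes the resulting scores calibrated: within each block, the predicted probability equals the empirical frequency of the positive label.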
📝 Abstract
The field of algorithms with predictions incorporates machine learning advice in the design of online algorithms to improve real-world performance. While this theoretical framework often assumes uniform reliability across all predictions, modern machine learning models can now provide instance-level uncertainty estimates. In this paper, we propose calibration as a principled and practical tool to bridge this gap, demonstrating the benefits of calibrated advice through two case studies: the ski rental and online job scheduling problems. For ski rental, we design an algorithm that achieves optimal prediction-dependent performance and prove that, in high-variance settings, calibrated advice offers more effective guidance than alternative methods for uncertainty quantification. For job scheduling, we demonstrate that using a calibrated predictor leads to significant performance improvements over existing methods. Evaluations on real-world data validate our theoretical findings, highlighting the practical impact of calibration for algorithms with predictions.
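To make the ski-rental setting concrete, the sketch below shows one toy way a calibrated probability could steer the rent-or-buy threshold (rent costs 1 per day, buying costs B). The interpolation rule here is our own assumption for illustration, not the algorithm from the paper.

```python
# Toy illustration (not the paper's algorithm): use a calibrated probability
# p = Pr[season lasts at least B days] to pick a buy day in ski rental.
import math

def buy_day(B: int, p: float) -> int:
    """Day on which to buy skis. Confident predictions push the threshold
    toward the extremes; p = 0.5 recovers the classical break-even day B."""
    # Interpolate between buying immediately (p -> 1) and delaying the
    # purchase as long as possible (p -> 0).
    return max(1, math.ceil((1.0 - p) * 2 * B))

def ski_rental_cost(n_days: int, B: int, day: int) -> int:
    """Total cost if the season lasts n_days and we buy on `day`."""
    if n_days < day:
        return n_days          # rented every day, never bought
    return (day - 1) + B       # rented until buying, then bought

# With a confident "long season" prediction we buy at once and pay only B;
# with a confident "short season" prediction we just rent.
print(ski_rental_cost(20, 10, buy_day(10, 1.0)))  # 10
print(ski_rental_cost(5, 10, buy_day(10, 0.0)))   # 5
```

The point of the illustration is the dependence on prediction confidence: a well-calibrated p lets the algorithm interpolate between trusting the advice fully and falling back toward worst-case-safe behavior.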