Ski Rental with Distributional Predictions of Unknown Quality

📅 2026-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work studies the online ski rental problem with distributional predictions of unknown quality. The prediction is modeled as a probability distribution over the number of ski days, and its error is measured by the Wasserstein-1 distance. We present the first algorithm that achieves a balance between consistency and robustness without prior knowledge of the prediction error. When the prediction is accurate, the algorithm attains $O(\sqrt{b})$ consistency; in the worst case, it guarantees $O(b \log b)$ robustness. Moreover, its expected cost never exceeds the optimal offline cost plus an additive error of $O(\min\{\max(\eta,1)\cdot\sqrt{b},\, b \log b\})$, where $\eta$ denotes the Wasserstein-1 error. Information-theoretic lower bounds demonstrate that these results are nearly tight.

📝 Abstract
We revisit the central online problem of ski rental in the "algorithms with predictions" framework from the point of view of distributional predictions. Ski rental was one of the first problems to be studied with predictions, where a natural prediction is simply the number of ski days. But it is both more natural and potentially more powerful to think of a prediction as a distribution $\hat{p}$ over the ski days. If the true number of ski days is drawn from some true (but unknown) distribution $p$, then we show as our main result that there is an algorithm with expected cost at most $\mathrm{OPT} + O(\min\{\max(\eta,1)\cdot\sqrt{b},\, b \log b\})$, where $\mathrm{OPT}$ is the expected cost of the optimal policy for the true distribution $p$, $b$ is the cost of buying, and $\eta$ is the Earth Mover's (Wasserstein-1) distance between $p$ and $\hat{p}$. Note that when $\eta = o(\sqrt{b})$ this gives additive loss less than $b$ (the trivial bound), and when $\eta$ is arbitrarily large (corresponding to an extremely inaccurate prediction) we still do not pay more than $O(b \log b)$ additive loss. An implication of these bounds is that our algorithm has consistency $O(\sqrt{b})$ (additive loss when the prediction error is 0) and robustness $O(b \log b)$ (additive loss when the prediction error is arbitrarily large). Moreover, we do not need to assume that we know (or have any bound on) the prediction error $\eta$, in contrast with previous work in robust optimization which assumes that we know this error. We complement this upper bound with a variety of lower bounds showing that it is essentially tight: not only can the consistency/robustness tradeoff not be improved, but our particular loss function cannot be meaningfully improved.
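To make the quantities in the abstract concrete, here is a minimal sketch of the ski rental cost model, the offline benchmark OPT for a known distribution $p$, and the Wasserstein-1 error between $p$ and $\hat{p}$. This is illustrative only: it is not the paper's algorithm, and all function names are our own. It considers only deterministic "buy on day $k$" policies, which suffice for the offline optimum over a known distribution.

```python
import numpy as np

# Ski rental cost model (illustrative sketch, not the paper's algorithm).
# Renting costs 1 per day; buying costs b once. Distributions are vectors
# p where p[d-1] is the probability of exactly d ski days.

def policy_cost(k, d, b):
    """Cost of the policy 'rent on days 1..k-1, buy on day k' when skiing lasts d days."""
    if d < k:
        return d            # season ends before we buy: we only ever rent
    return (k - 1) + b      # rent k-1 days, then pay b to buy

def expected_cost(k, p, b):
    """Expected cost of buy-on-day-k under distribution p over ski days 1..len(p)."""
    return sum(prob * policy_cost(k, d, b) for d, prob in enumerate(p, start=1))

def opt(p, b):
    """Expected cost of the best deterministic buy day for a known distribution p.

    k = len(p) + 1 encodes the 'never buy' policy.
    """
    n = len(p)
    return min(expected_cost(k, p, b) for k in range(1, n + 2))

def wasserstein1(p, q):
    """Earth Mover's (Wasserstein-1) distance between two distributions on {1..n}:
    the L1 distance between their CDFs."""
    cp, cq = np.cumsum(p), np.cumsum(q)
    return float(np.abs(cp - cq).sum())
```

For example, if $b = 4$ and $p$ is a point mass on 10 ski days, the best policy buys immediately and `opt` returns 4; if $p$ is a point mass on a single ski day, never buying is optimal and `opt` returns 1. The paper's guarantee says an online algorithm seeing only $\hat{p}$ pays at most `opt(p, b)` plus an additive term controlled by `wasserstein1(p, p_hat)`.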
Problem

Research questions and friction points this paper is trying to address.

ski rental
distributional predictions
online algorithms
prediction error
robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

distributional predictions
ski rental problem
Wasserstein distance
consistency-robustness tradeoff
online algorithms with predictions