🤖 AI Summary
This work investigates optimal error bounds for prediction with limited selectivity (PLS), where the forecaster may begin a prediction only on a restricted subset of the time horizon. To formalize this constrained online prediction setting, we propose a PLS framework and introduce a novel instance-dependent complexity measure. Based on this measure, we derive upper and lower bounds on the optimal prediction error. Our analysis combines instance-specific complexity characterization with average-case analysis, yielding instance-dependent error bounds within the PLS paradigm. Furthermore, we show that these bounds match with high probability on randomly generated instances, validating the tightness of the proposed complexity measure. Collectively, our results establish a theoretical foundation for selective prediction under constraints on when a forecast may begin, advancing the understanding of fundamental limits in selective forecasting.
📝 Abstract
Selective prediction [Dru13, QV19] models the scenario where a forecaster freely decides on the prediction window that their forecast spans. Many data statistics can be predicted to a non-trivial error rate without any distributional assumptions or expert advice, yet these results rely on the forecaster's freedom to start predicting at any time. We introduce a model of Prediction with Limited Selectivity (PLS) where the forecaster can start the prediction only on a subset of the time horizon. We study the optimal prediction error both on an instance-by-instance basis and via an average-case analysis. We introduce a complexity measure that gives instance-dependent bounds on the optimal error. For a randomly-generated PLS instance, these bounds match with high probability.
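To make the setting concrete, here is a minimal illustrative sketch of a PLS-style instance. It is not the paper's algorithm: the `pls_error` function, the mean-of-prefix prediction rule, and the choice of allowed start times are all assumptions made for illustration. The forecaster observes a prefix of the sequence, may begin its forecast only at a start time drawn from an allowed set, and predicts the mean of the upcoming window.

```python
import random

def pls_error(seq, allowed_starts, window):
    """Illustrative selective prediction: forecast the mean of
    seq[t:t+window] using the mean of the observed prefix seq[:t],
    where the start time t must lie in `allowed_starts` (the PLS
    restriction). Returns the absolute prediction error."""
    feasible = [s for s in allowed_starts if 0 < s <= len(seq) - window]
    t = random.choice(feasible)              # forecaster picks an allowed start
    prediction = sum(seq[:t]) / t            # statistic of the observed prefix
    truth = sum(seq[t:t + window]) / window  # realized mean on the window
    return abs(prediction - truth)

random.seed(0)
seq = [random.random() for _ in range(1000)]

# Unrestricted selectivity: the forecast may start at any time.
err_full = pls_error(seq, range(1, 901), window=100)

# Limited selectivity: the forecast may start only at a sparse subset
# of the horizon, which is the constraint the PLS model studies.
err_limited = pls_error(seq, [100, 500, 900], window=100)
```

The only difference between the two calls is the set of allowed start times; the PLS model asks how much the restriction to a sparse set can degrade the optimal error.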