🤖 AI Summary
This tutorial addresses the limitations of traditional forecasting methods, namely their reliance on probabilistic modeling assumptions and hypothetical future distributions, by surveying Defensive Forecasting: a hypothesis-free framework with non-asymptotic optimality guarantees. Instead of assuming a data-generating mechanism, the method frames forecasting as an adversarial sequential game and achieves robustness by correcting past prediction errors online. Its key contributions are threefold: (1) a unified defensive prediction paradigm grounded in Vovk’s game-theoretic framework and online convex optimization; (2) a hyperparameter-free recursive calibration algorithm with theoretical guarantees for online conformal prediction; and (3) attainment of the optimal $O(\sqrt{T})$ regret bound, strong calibration, and exact calibration under arbitrary outcome sequences. The derived algorithms are simple and near-optimal across online learning, prediction with expert advice, and online conformal prediction tasks.
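The "online correction of past prediction errors" idea behind the conformal piece can be sketched with a simple quantile tracker: raise the prediction-set threshold after a miss, lower it after a hit, so empirical coverage approaches the target for any score sequence. This is a minimal sketch in the spirit of the recursive calibration the summary describes; the learning rate `eta` and the uniform score stream are our illustrative assumptions, not the paper's hyperparameter-free scheme.

```python
import random

def online_quantile_tracker(scores, alpha=0.1, eta=0.05):
    """Track a threshold q so scores exceed q roughly an alpha fraction of the time.

    Update: q <- q + eta * (miss - alpha), an online gradient step on the
    pinball loss. Misses push q up (widen the set), hits pull it down:
    prediction by correcting past mistakes, with no model of the scores.
    """
    q, misses = 1.0, 0
    for s in scores:
        miss = 1.0 if s > q else 0.0  # did the score land outside the set?
        misses += miss
        q += eta * (miss - alpha)
    return q, misses / len(scores)

random.seed(0)
scores = [random.random() for _ in range(20000)]  # arbitrary score stream
q, miss_rate = online_quantile_tracker(scores, alpha=0.1)
```

For uniform scores the threshold settles near the 0.9 quantile and the long-run miss rate hovers near `alpha`, but the coverage guarantee itself holds sequence by sequence, with no distributional assumption.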
📝 Abstract
This tutorial provides a survey of algorithms for Defensive Forecasting, where predictions are derived not by prognostication but by correcting past mistakes. Pioneered by Vovk, Defensive Forecasting frames the goal of prediction as a sequential game, and derives predictions to minimize metrics no matter what outcomes occur. We present an elementary introduction to this general theory and derive simple, near-optimal algorithms for online learning, calibration, prediction with expert advice, and online conformal prediction.
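The "correcting past mistakes" framing can be made concrete with a toy defensive forecaster for binary outcomes: choose each forecast from the sign of the accumulated bias, so that the cumulative error $\sum_t (y_t - p_t)$ stays bounded no matter what sequence the adversary plays. This example is our illustration of the sequential-game idea, not an algorithm stated in the tutorial.

```python
def defensive_mean_forecaster(outcomes):
    """Return the final cumulative bias sum(y_t - p_t) of a defensive forecaster."""
    bias = 0.0
    for y in outcomes:
        p = 1.0 if bias > 0 else 0.0  # lean against the accumulated error
        bias += y - p                 # the adversary reveals y only after p
    return bias

# No matter how the 0/1 sequence is chosen, the bias never leaves [0, 1],
# so the average forecast matches the average outcome at rate O(1/T):
seq = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0] * 100
final_bias = defensive_mean_forecaster(seq)
```

The invariant is immediate: when the bias is positive the forecaster predicts 1 and the bias cannot grow; when it is nonpositive it predicts 0 and the bias cannot fall below zero. This worst-case, no-assumptions guarantee is the flavor of result Defensive Forecasting delivers for richer metrics like regret and calibration.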