🤖 AI Summary
This work addresses adaptive conformal prediction under non-exchangeable data, proposing a parameter-free, theoretically grounded method for online uncertainty calibration. Unlike conventional online gradient-based approaches, which rely on hand-tuned hyperparameters (e.g., learning rates), we introduce parameter-free online convex optimization (specifically, betting-based strategies) into the conformal inference framework for the first time. Our method integrates sequential quantile estimation with betting-style calibration to enable dynamic, real-time updating of prediction sets. We establish a rigorous theoretical guarantee: the long-run miscoverage frequency converges almost surely to the pre-specified nominal level. Empirically, our approach achieves significantly better coverage validity and predictive efficiency than pinball-loss-based baselines in time-series forecasting and under distribution shift, without any hyperparameter tuning during deployment.
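The flavor of the betting-style calibration described above can be conveyed with a small sketch. This is not the paper's exact algorithm: it is a generic coin-betting reduction (with a Krichevsky-Trofimov betting fraction) applied to online quantile tracking, and the target level `alpha` and initial wealth `w0` are illustrative assumptions.

```python
import numpy as np

def betting_threshold_tracker(scores, alpha, w0=1.0):
    """Parameter-free online tracking of the (1 - alpha)-quantile of a
    score stream via a coin-betting reduction. Illustrative sketch only,
    not the authors' exact method.

    scores : iterable of nonconformity scores revealed one at a time
    alpha  : target long-run miscoverage frequency
    w0     : initial wealth of the bettor (an assumed constant)
    """
    wealth = w0
    coin_sum = 0.0          # running sum of past "coin" outcomes
    thresholds = []
    for t, s in enumerate(scores, start=1):
        # Krichevsky-Trofimov betting fraction: no learning rate anywhere
        beta = coin_sum / t
        q = beta * wealth   # the current bet doubles as the threshold
        thresholds.append(q)
        # miscoverage indicator and the resulting coin outcome in [-1, 1]
        err = float(s > q)
        coin = err - alpha  # negative subgradient of the pinball loss
        # settle the bet; wealth stays nonnegative since |coin * beta| <= 1
        wealth += coin * q
        coin_sum += coin
    return np.array(thresholds)
```

On an i.i.d. stream the fraction of scores exceeding the threshold drifts toward `alpha` with no tuning at all: a miss enlarges the bettor's wealth (and hence the next threshold), while covered rounds slowly shrink it. The actual method layers sequential quantile-estimation machinery on top of this mechanism.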
📝 Abstract
Conformal prediction is a valuable tool for quantifying the predictive uncertainty of machine learning models. However, its applicability relies on the assumption of data exchangeability, a condition that is often not met in real-world scenarios. In this paper, we consider the problem of adaptive conformal inference without any assumptions about the data-generating process. Existing approaches to adaptive conformal inference optimize the pinball loss using variants of online gradient descent. A notable shortcoming of such approaches is their explicit dependence on, and sensitivity to, the choice of learning rate. We instead propose an approach to adaptive conformal inference that leverages parameter-free online convex optimization techniques. We prove that our method controls the long-term miscoverage frequency at the nominal level and demonstrate its convincing empirical performance without any need for cumbersome parameter tuning.
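For contrast, the pinball-loss baseline the abstract refers to reduces to a one-line online gradient descent update on the threshold, in the spirit of adaptive conformal inference. The sketch below is a generic illustration, not any specific published variant; the learning rate `lr` is precisely the hand-tuned hyperparameter the proposed method removes.

```python
import numpy as np

def ogd_pinball_tracker(scores, alpha, lr):
    """Online (sub)gradient descent on the pinball loss at level 1 - alpha
    (illustrative baseline sketch). Long-run coverage holds for a wide
    range of learning rates, but the threshold's stability, and hence the
    efficiency of the prediction sets, depends on the choice of lr.
    """
    q = 0.0
    thresholds = []
    for s in scores:
        thresholds.append(q)
        err = float(s > q)          # miscoverage indicator for this round
        q += lr * (err - alpha)     # subgradient step: grow q after a miss
    return np.array(thresholds)
```

A too-small `lr` adapts sluggishly under distribution shift, while a too-large one makes the threshold (and the prediction sets) oscillate; this sensitivity is exactly what motivates the parameter-free alternative.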