🤖 AI Summary
This work addresses a critical limitation of traditional conformal prediction, which only guarantees average error control over a fixed calibration set and fails to ensure validity at arbitrary time points as calibration data dynamically accumulate. To overcome this, the authors propose a dynamic conformal prediction framework grounded in quantile analysis, which provides high-probability risk control for prediction sets at any time during ongoing calibration—even under distributional shifts. The method establishes, for the first time, theoretical validity guarantees for all time points in a dynamic calibration setting, proves an asymptotically tight lower bound, and demonstrates robustness and practical utility in non-stationary environments through both simulation studies and real-world data experiments.
📝 Abstract
Prediction sets provide a means of quantifying the uncertainty in predictive tasks. Using held-out calibration data, conformal prediction and risk control can produce prediction sets that exhibit statistically valid error control in a computationally efficient manner. However, in the standard formulations, the error is only controlled on average over many possible calibration datasets of fixed size. In this paper, we extend the control to remain valid with high probability over a cumulatively growing calibration dataset at any time point. We derive such guarantees using quantile-based arguments and illustrate the applicability of the proposed framework to settings involving distribution shift. We further establish a matching lower bound and show that our guarantees are asymptotically tight. Finally, we demonstrate the practical performance of our methods through both simulations and real-world numerical examples.
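For context, the fixed-calibration-set baseline that the paper extends can be sketched as standard split conformal prediction. The toy model, function names, and data below are illustrative assumptions, not taken from the paper; the quantile formula is the usual finite-sample one.

```python
# Minimal sketch of split conformal prediction (the fixed-calibration baseline).
# The regression model y = 2x + noise and all names here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def conformal_radius(cal_residuals, alpha):
    """Half-width of a split-conformal interval at miscoverage level alpha.

    Uses the standard finite-sample empirical quantile at level
    ceil((n + 1) * (1 - alpha)) / n of the calibration residuals.
    """
    n = len(cal_residuals)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(cal_residuals, level, method="higher")

# Held-out calibration data: true relation y = 2x + noise, "model" predicts 2x.
x_cal = rng.uniform(0, 1, 500)
y_cal = 2 * x_cal + rng.normal(0, 0.1, 500)
residuals = np.abs(y_cal - 2 * x_cal)

q = conformal_radius(residuals, alpha=0.1)

# Marginal coverage on fresh test data should be close to 90% on average --
# but only on average over calibration sets of this fixed size, which is
# exactly the limitation the paper's dynamic framework addresses.
x_test = rng.uniform(0, 1, 2000)
y_test = 2 * x_test + rng.normal(0, 0.1, 2000)
coverage = float(np.mean(np.abs(y_test - 2 * x_test) <= q))
print(round(coverage, 3))
```

The prediction set for a new input x is simply [2x - q, 2x + q]; the paper's contribution is making such coverage statements hold with high probability at every time point as the calibration set grows, rather than on average for one fixed size.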