AI Summary
In online learning settings, existing multi-model conformal prediction methods lack theoretical guarantees for model selection and aggregation. Method: This paper proposes the first online conformal model aggregation framework based on temporal weighted voting, integrating conformal prediction, online learning, and an empirical coverage-driven weight update mechanism to enable real-time, adaptive adjustment of model weights. The framework rigorously maintains $1-\alpha$ marginal coverage while dynamically optimizing prediction set quality. Contribution/Results: Unlike conventional paradigms that require a pre-specified single model, our approach supports seamless integration of heterogeneous model streams. Evaluated on multiple data stream benchmarks, it significantly reduces average prediction set width, achieving both statistical reliability (via guaranteed coverage) and practical utility (via tighter, adaptive intervals).
Abstract
Conformal prediction equips machine learning models with a reasonable notion of uncertainty quantification without making strong distributional assumptions. It wraps around any black-box prediction model and converts point predictions into set predictions that have a predefined marginal coverage guarantee. However, conformal prediction only works if we fix the underlying machine learning model in advance. A relatively unaddressed issue in conformal prediction is that of model selection and/or aggregation: for a given problem, which of the plethora of prediction methods (random forests, neural nets, regularized linear models, etc.) should we conformalize? This paper proposes a new approach towards conformal model aggregation in online settings that is based on combining the prediction sets from several algorithms by voting, where weights on the models are adapted over time based on past performance.
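The abstract's core idea, combining prediction sets by weighted voting and adapting the weights from past performance, can be sketched as follows. This is a minimal illustration under assumed design choices, not the paper's exact procedure: the function names (`majority_vote_set`, `update_weights`), the 0.5 vote threshold, and the particular loss (interval width plus a miscoverage penalty, fed into a multiplicative-weights update) are all hypothetical simplifications.

```python
import numpy as np

def majority_vote_set(intervals, weights, grid, threshold=0.5):
    """Aggregate per-model prediction intervals into one set by weighted voting.

    A candidate point on `grid` is kept if the total (normalized) weight of
    models whose interval contains it is at least `threshold`.
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    votes = np.zeros_like(grid, dtype=float)
    for (lo, hi), w in zip(intervals, weights):
        # Each model casts a weighted vote for the points it covers.
        votes += w * ((grid >= lo) & (grid <= hi))
    return grid[votes >= threshold]

def update_weights(weights, intervals, y, eta=0.5, miss_penalty=5.0):
    """Hypothetical performance-driven update: multiplicative weights with a
    loss that rewards narrow intervals and heavily penalizes missing the
    realized label y."""
    new = []
    for (lo, hi), w in zip(intervals, weights):
        width = hi - lo
        miss = 0.0 if lo <= y <= hi else 1.0
        loss = width + miss_penalty * miss
        new.append(w * np.exp(-eta * loss))
    new = np.asarray(new)
    return new / new.sum()

# Two models propose intervals for the same target; voting keeps the
# region backed by enough weight, then the weights adapt to the outcome.
grid = np.linspace(0.0, 3.0, 301)
agg = majority_vote_set([(0.0, 2.0), (1.0, 3.0)], [0.5, 0.5], grid,
                        threshold=0.6)   # only the overlap [1, 2] survives
w = update_weights([0.5, 0.5], [(0.0, 1.0), (0.0, 10.0)], y=0.5)
# The narrower covering model gains weight over the wide one.
```

In this sketch the vote threshold interpolates between a weighted union (small threshold) and a weighted intersection (large threshold) of the models' sets; the paper's actual mechanism for preserving the $1-\alpha$ coverage guarantee is more involved than this illustration.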