🤖 AI Summary
Real-world time-series forecasting faces two key reliability challenges: in-distribution underfitting and out-of-distribution (OOD) input failure, compounded by the absence of label-free, dynamic uncertainty quantification. To address these, we propose a dual-rejection framework: (i) ambiguity rejection, triggered by high prediction-error variance to mitigate underfitting, and (ii) novelty rejection, based on VAE-encoded Mahalanobis distance to detect data drift. Together, these enable unsupervised, online confidence estimation and adaptive prediction termination, without requiring future ground-truth labels. Evaluated across diverse time-series benchmarks, our method significantly reduces erroneous predictions and demonstrates robustness to distributional shifts and strong adaptability to evolving data. It establishes a new, interpretable, and deployable paradigm for high-reliability time-series forecasting.
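The ambiguity-rejection side of the framework can be illustrated with a minimal sketch. The summary does not specify the exact statistic or thresholding rule, so the function name, window size, and variance threshold below are illustrative assumptions: the model abstains when the variance of its recent (already-observed) forecast errors is high, using only past ground truth.

```python
import numpy as np

def ambiguity_reject(past_errors, window=50, var_threshold=0.5):
    """Abstain when the rolling variance of recent prediction errors is high.

    past_errors: sequence of historical forecast errors (y_true - y_pred),
        available because their ground truth has already arrived.
    Returns True if the model should reject (abstain from) the next forecast.
    Note: `window` and `var_threshold` are illustrative, not from the paper.
    """
    recent = np.asarray(past_errors, dtype=float)[-window:]
    if recent.size < 2:
        return False  # not enough history to estimate error variance
    return bool(np.var(recent) > var_threshold)
```

Because the statistic uses only errors whose ground truth has already been observed, the check runs online and needs no future labels, matching the "label-free, dynamic" requirement stated above.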
📝 Abstract
In real-world time series forecasting, uncertainty and the lack of reliable evaluation pose significant challenges. Notably, forecasting errors often arise from underfitting in-distribution data and from failing to handle out-of-distribution inputs. To enhance model reliability, we introduce a dual rejection mechanism combining ambiguity and novelty rejection. Ambiguity rejection allows the model to abstain under low confidence, assessed through the variance of historical prediction errors without requiring future ground truth. Novelty rejection, employing Variational Autoencoders and Mahalanobis distance, detects deviations from the training data. This dual approach reduces errors and adapts to data changes, improving forecasting reliability in dynamic, complex scenarios.
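The novelty-rejection test described above can be sketched as follows. This is an illustrative implementation under stated assumptions, not the paper's exact procedure: inputs are assumed to already be encoded into latent vectors by the VAE encoder, and the function name, covariance estimator, and threshold are my own choices. The score is the Mahalanobis distance of a new latent vector from the empirical distribution of training latents.

```python
import numpy as np

def mahalanobis_novelty(z, train_latents, threshold=3.0):
    """Flag an input as novel when its VAE latent vector lies far from the
    training latent distribution, measured by Mahalanobis distance.

    z: latent vector of the new input, shape (d,)
    train_latents: latent vectors of the training set, shape (n, d)
    threshold: illustrative cutoff (not taken from the paper)
    Returns (is_novel, distance).
    """
    mu = train_latents.mean(axis=0)
    cov = np.atleast_2d(np.cov(train_latents, rowvar=False))
    cov_inv = np.linalg.pinv(cov)  # pseudo-inverse for numerical stability
    diff = np.atleast_1d(z) - mu
    dist = float(np.sqrt(diff @ cov_inv @ diff))
    return dist > threshold, dist
```

Using the (pseudo-)inverse covariance makes the distance account for correlations between latent dimensions, so drift along a tightly concentrated direction is penalized more than the same displacement along a high-variance one, which is why Mahalanobis distance is a natural fit for detecting deviations from the training distribution.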