🤖 AI Summary
This work addresses the challenge of improving accuracy and calibrating uncertainty in deployed time series forecasting models without retraining or architectural modifications. We propose δ-Adapter, a lightweight, architecture-agnostic post-processing framework that formulates post-processing as a standalone optimization phase. By jointly leveraging soft editing of input covariates, residual correction of outputs, sparse time-aware feature masking, and quantile- and conformal-based calibrators, δ-Adapter enhances prediction accuracy, performs implicit feature selection, and calibrates uncertainty—all without altering the original model. Theoretical analysis establishes its stability, while extensive experiments demonstrate consistent improvements in both point forecast accuracy and prediction interval coverage across diverse backbone models and datasets. Notably, δ-Adapter incurs minimal computational overhead and seamlessly integrates with existing model interfaces.
📝 Abstract
Time series forecasting has long been dominated by advances in model architecture, with recent progress driven by deep learning and hybrid statistical techniques. However, as forecasting models approach diminishing returns in accuracy, a critical yet underexplored opportunity emerges: the strategic use of post-processing. In this paper, we address the last-mile gap in time series forecasting: improving accuracy and uncertainty calibration without retraining or modifying a deployed backbone. We propose $\delta$-Adapter, a lightweight, architecture-agnostic framework for boosting deployed time series forecasters without retraining. $\delta$-Adapter learns small, bounded modules at two interfaces: input nudging (soft edits to covariates) and output residual correction. We provide local descent guarantees, $O(\delta)$ drift bounds, and compositional stability for combined adapters. Beyond accuracy, $\delta$-Adapter acts as a feature selector by learning a sparse, horizon-aware mask over input covariates, thereby improving interpretability, and as a distribution calibrator for uncertainty quantification. To this end, we introduce a Quantile Calibrator and a Conformal Corrector that together deliver calibrated, personalized intervals with finite-sample coverage. Our experiments across diverse backbones and datasets show that $\delta$-Adapter improves accuracy and calibration with negligible compute and no interface changes.
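To make the two interfaces concrete, here is a minimal illustrative sketch (not the paper's implementation) of an output residual corrector fitted around a frozen backbone, combined with a split-conformal band for finite-sample coverage. All names (`backbone_predict`, `fit_residual_adapter`, `conformal_interval`) and the linear form of the correction are assumptions introduced for illustration only.

```python
import numpy as np

def fit_residual_adapter(X_cal, y_cal, backbone_predict, ridge=1e-3):
    """Fit a small linear correction delta(x) = W^T [x; 1] on calibration residuals.

    The backbone is treated as a black box and is never retrained; the ridge
    penalty keeps the learned correction small (a bounded "delta" module).
    """
    preds = backbone_predict(X_cal)                     # frozen backbone forecasts
    resid = y_cal - preds                               # residuals the adapter absorbs
    X1 = np.hstack([X_cal, np.ones((len(X_cal), 1))])   # append bias column
    W = np.linalg.solve(X1.T @ X1 + ridge * np.eye(X1.shape[1]), X1.T @ resid)
    return W

def conformal_interval(X_cal, y_cal, X_new, backbone_predict, W, alpha=0.1):
    """Split-conformal interval around the corrected forecast (~(1-alpha) coverage).

    X_cal/y_cal must be held out from the data used to fit W for the split
    guarantee to apply.
    """
    def corrected(X):
        X1 = np.hstack([X, np.ones((len(X), 1))])
        return backbone_predict(X) + X1 @ W             # backbone + residual correction

    scores = np.abs(y_cal - corrected(X_cal))           # nonconformity scores
    n = len(scores)
    q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
    center = corrected(X_new)
    return center - q, center + q
```

In this sketch the residual corrector plays the role of the output-side adapter, and the absolute-residual conformal step stands in for the Conformal Corrector; the paper's quantile-based calibrator and input-side soft editing are not shown.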