AI Summary
To address the fragmentation between encoding, decoding, and training in time series forecasting, this paper proposes a unified encoder-decoder framework capable of handling generalized prediction tasks, including extrapolation, interpolation, and missing value imputation. The method introduces three key innovations: (1) a time-aware latent bottleneck encoder that jointly models cross-channel and long-range temporal dependencies; (2) a decoder based on learnable timestamp queries, enabling flexible adaptation to arbitrary input and target time positions; and (3) a multi-task unified training strategy coupled with generic temporal positional embeddings. Evaluated across multiple benchmark datasets, the framework achieves significant improvements over state-of-the-art methods, demonstrating superior predictive accuracy, strong generalization across diverse forecasting tasks, and seamless compatibility with heterogeneous time series structures.
Abstract
In machine learning, effective modeling requires a holistic consideration of how to encode inputs, make predictions (i.e., decoding), and train the model. However, in time-series forecasting, prior work has predominantly focused on encoder design, often treating prediction and training as separate or secondary concerns. In this paper, we propose TimePerceiver, a unified encoder-decoder forecasting framework that is tightly aligned with an effective training strategy. To be specific, we first generalize the forecasting task to include diverse temporal prediction objectives such as extrapolation, interpolation, and imputation. Since this generalization requires handling input and target segments that are arbitrarily positioned along the temporal axis, we design a novel encoder-decoder architecture that can flexibly perceive and adapt to these varying positions. For encoding, we introduce a set of latent bottleneck representations that can interact with all input segments to jointly capture temporal and cross-channel dependencies. For decoding, we leverage learnable queries corresponding to target timestamps to effectively retrieve relevant information. Extensive experiments demonstrate that our framework consistently and significantly outperforms prior state-of-the-art baselines across a wide range of benchmark datasets. The code is available at https://github.com/efficient-learning-lab/TimePerceiver.
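The encode-then-decode flow described above (latents attending to arbitrarily positioned input segments, then target-timestamp queries retrieving predictions from those latents) can be sketched with plain cross-attention. This is a minimal NumPy illustration, not the paper's implementation: all shapes, the single-head attention, and the random "learnable" parameters are assumptions for exposition.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values, d):
    # Single-head attention: queries (Q, d) attend to keys_values (K, d).
    scores = queries @ keys_values.T / np.sqrt(d)   # (Q, K) similarity
    return softmax(scores, axis=-1) @ keys_values   # (Q, d) weighted summary

rng = np.random.default_rng(0)
d = 16                                  # hypothetical model width
n_inputs, n_latents, n_targets = 96, 8, 24

# Input-segment tokens; in the paper these would carry temporal
# positional embeddings encoding each segment's position on the time axis.
inputs = rng.normal(size=(n_inputs, d))
# Latent bottleneck (learnable in the real model, random here).
latents = rng.normal(size=(n_latents, d))
# Learnable queries, one per target timestamp to be predicted.
target_queries = rng.normal(size=(n_targets, d))

# Encoding: the small set of latents attends to all input tokens,
# compressing temporal/cross-channel information into the bottleneck.
latents = cross_attention(latents, inputs, d)            # (8, 16)
# Decoding: timestamp queries retrieve relevant information from latents.
predictions = cross_attention(target_queries, latents, d)  # (24, 16)
print(predictions.shape)
```

Because the target queries are indexed by timestamp rather than by position in a fixed output window, the same mechanism serves extrapolation, interpolation, and imputation: only the set of queried timestamps changes.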