TimePerceiver: An Encoder-Decoder Framework for Generalized Time-Series Forecasting

📅 2025-12-27
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the fragmentation between encoding, decoding, and training in time series forecasting, this paper proposes a unified encoder-decoder framework capable of handling generalized prediction tasks, including extrapolation, interpolation, and missing-value imputation. The method introduces three key innovations: (1) a time-aware latent bottleneck encoder that jointly models cross-channel and long-range temporal dependencies; (2) a decoder based on learnable timestamp queries, enabling flexible adaptation to arbitrary input and target time positions; and (3) a multi-task unified training strategy coupled with generic temporal positional embeddings. Evaluated across multiple benchmark datasets, the framework achieves significant improvements over state-of-the-art methods, demonstrating superior predictive accuracy, strong generalization across diverse forecasting tasks, and seamless compatibility with heterogeneous time series structures.

๐Ÿ“ Abstract
In machine learning, effective modeling requires a holistic consideration of how to encode inputs, make predictions (i.e., decoding), and train the model. However, in time-series forecasting, prior work has predominantly focused on encoder design, often treating prediction and training as separate or secondary concerns. In this paper, we propose TimePerceiver, a unified encoder-decoder forecasting framework that is tightly aligned with an effective training strategy. To be specific, we first generalize the forecasting task to include diverse temporal prediction objectives such as extrapolation, interpolation, and imputation. Since this generalization requires handling input and target segments that are arbitrarily positioned along the temporal axis, we design a novel encoder-decoder architecture that can flexibly perceive and adapt to these varying positions. For encoding, we introduce a set of latent bottleneck representations that can interact with all input segments to jointly capture temporal and cross-channel dependencies. For decoding, we leverage learnable queries corresponding to target timestamps to effectively retrieve relevant information. Extensive experiments demonstrate that our framework consistently and significantly outperforms prior state-of-the-art baselines across a wide range of benchmark datasets. The code is available at https://github.com/efficient-learning-lab/TimePerceiver.
Problem

Research questions and friction points this paper is trying to address.

Generalizes forecasting to include extrapolation, interpolation, and imputation tasks
Handles arbitrarily positioned input and target segments along temporal axis
Unifies encoder-decoder design with effective training strategy for time-series
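The generalized task above boils down to partitioning the timestamps of a window into observed (input) and hidden (target) index sets, where the split depends on the objective. A minimal sketch of such a sampler, assuming a hypothetical helper `sample_task` (not from the paper's repository):

```python
import numpy as np

def sample_task(T=96, task="extrapolate", rng=None):
    """Split timestamps 0..T-1 into (input, target) index sets for one of
    the three generalized forecasting objectives. Split ratios are
    illustrative assumptions, not values from the paper."""
    rng = rng or np.random.default_rng(0)
    t = np.arange(T)
    if task == "extrapolate":        # past -> future
        split = int(0.75 * T)
        return t[:split], t[split:]
    if task == "interpolate":        # both ends -> a contiguous middle gap
        lo, hi = int(0.4 * T), int(0.6 * T)
        return np.concatenate([t[:lo], t[hi:]]), t[lo:hi]
    if task == "impute":             # random observed -> random missing
        mask = rng.random(T) < 0.8
        return t[mask], t[~mask]
    raise ValueError(f"unknown task: {task}")
```

Under this view, all three objectives become one prediction problem over arbitrary input/target positions, which is what the encoder-decoder design must accommodate.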
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generalized encoder-decoder for diverse temporal prediction tasks
Latent bottleneck representations capture temporal and cross-channel dependencies
Learnable queries retrieve information for target timestamps
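The two attention stages above can be sketched with plain single-head cross-attention: a small set of latent vectors attends to all input embeddings (the bottleneck), and learnable target-timestamp queries then attend to those latents. This is a minimal numpy illustration under assumed sizes, not the paper's implementation (which would add multi-head attention, feed-forward blocks, and positional embeddings):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(q, kv, Wq, Wk, Wv):
    # single-head scaled dot-product cross-attention
    Q, K, V = q @ Wq, kv @ Wk, kv @ Wv
    A = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))
    return A @ V

rng = np.random.default_rng(0)
d = 16                               # model width (assumed)
latents = rng.normal(size=(8, d))    # learned bottleneck array
inputs  = rng.normal(size=(96, d))   # embedded input segments
queries = rng.normal(size=(24, d))   # learnable queries for 24 target timestamps

W = [rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(6)]
z = cross_attend(latents, inputs, *W[:3])   # encode: latents attend to inputs
y = cross_attend(queries, z, *W[3:])        # decode: queries attend to latents
print(y.shape)  # (24, 16): one d-dim prediction vector per target timestamp
```

Because the latents attend to every input segment, encoding cost scales with the number of latents rather than quadratically in sequence length, and the queries can be placed at arbitrary target positions.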