🤖 AI Summary
Existing Transformer-based models for time series forecasting neglect intrinsic temporal properties, namely the unidirectional flow of influence from past to future and the decaying effect of past observations over time. To address this, we propose TimeFormer, a novel architecture centered on a Modulated Self-Attention (MoSA) mechanism. MoSA explicitly incorporates these temporal priors: a Hawkes-process constraint models decay dynamics, causal masking preserves temporal directionality, and multi-scale subsequence analysis captures dependencies across granularities. This infusion of temporal inductive bias strengthens the Transformer's capacity to model time series. Extensive experiments on multiple real-world benchmarks demonstrate that TimeFormer achieves state-of-the-art performance, reducing MSE by up to 7.45% relative to the best baseline and setting new records on 94.04% of evaluation metrics. Moreover, the MoSA module transfers well, serving as a plug-and-play enhancement for diverse Transformer variants.
📝 Abstract
Although Transformers excel in natural language processing, extending them to time series forecasting remains challenging because the differences between textual and temporal modalities are insufficiently considered. In this paper, we develop a novel Transformer architecture designed for time series data, aiming to maximize its representational capacity. We identify two key but often overlooked characteristics of time series: (1) influence flows unidirectionally from the past to the future, and (2) that influence decays over time. We introduce these characteristics to enhance the attention mechanism of Transformers. We propose TimeFormer, whose core innovation is a self-attention mechanism with two modulation terms (MoSA), designed to capture these temporal priors under the constraints of the Hawkes process and causal masking. Additionally, TimeFormer introduces a framework based on multi-scale subsequence analysis to capture semantic dependencies at different temporal scales, enriching the modeled temporal dependencies. Extensive experiments conducted on multiple real-world datasets show that TimeFormer significantly outperforms state-of-the-art methods, achieving up to a 7.45% reduction in MSE compared to the best baseline and setting new records on 94.04% of evaluation metrics. Moreover, we demonstrate that the MoSA mechanism can be broadly applied to enhance other Transformer-based models.
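To make the two temporal priors concrete, below is a minimal NumPy sketch of self-attention modulated by (1) a causal mask and (2) a Hawkes-style exponential decay on older positions. This is illustrative only: the function name, the scalar `decay_rate`, and the exponential form are assumptions, whereas the paper's actual modulation terms are part of the learned MoSA mechanism.

```python
import numpy as np

def modulated_self_attention(x, w_q, w_k, w_v, decay_rate=0.1):
    """Sketch: causal self-attention with a Hawkes-style decay modulation.

    x: (T, d) input sequence; w_q, w_k, w_v: (d, d) projection matrices.
    decay_rate is a hypothetical scalar controlling how fast the influence
    of past positions fades (the paper learns such modulation terms).
    Returns the attended output and the attention weights.
    """
    T, d = x.shape
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(d)                      # (T, T) raw scores

    # Temporal prior 1: causal mask -- position t attends only to s <= t,
    # preserving the unidirectional past-to-future direction of influence.
    idx = np.arange(T)
    causal = idx[None, :] <= idx[:, None]
    scores = np.where(causal, scores, -np.inf)

    # Temporal prior 2: decaying influence -- down-weight older keys by
    # exp(-decay_rate * (t - s)), applied additively in log-space so the
    # softmax multiplies each weight by the decay factor.
    lag = np.maximum(idx[:, None] - idx[None, :], 0)
    scores = scores - decay_rate * lag

    # Numerically stable softmax over the (masked, decayed) scores.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights
```

In this sketch both priors enter before the softmax: masked positions receive zero weight, and the decay term shifts log-scores so that, all else equal, more distant past observations contribute less to each output step.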