TimeFormer: Transformer with Attention Modulation Empowered by Temporal Characteristics for Time Series Forecasting

📅 2025-10-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing Transformer-based models for time series forecasting neglect intrinsic temporal properties—namely, unidirectional causality and the decaying influence of past observations over time. To address this, we propose TimeFormer, a novel architecture centered on a Modulated Self-Attention (MoSA) mechanism. MoSA explicitly incorporates temporal priors by integrating a Hawkes process to model decay dynamics, enforcing causal masking to preserve temporal directionality, and enabling multi-scale subsequence analysis to capture dynamic dependencies across granularities. This systematic infusion of temporal inductive bias significantly enhances temporal modeling capability within the Transformer framework. Extensive experiments on multiple real-world benchmarks demonstrate that TimeFormer achieves state-of-the-art performance, outperforming existing methods by up to 7.45% in MSE reduction and setting new records on 94.04% of evaluation metrics. Moreover, the MoSA module exhibits strong transferability, serving as a plug-and-play enhancement for diverse Transformer variants.

📝 Abstract
Although Transformers excel in natural language processing, their extension to time series forecasting remains challenging due to insufficient consideration of the differences between textual and temporal modalities. In this paper, we develop a novel Transformer architecture designed for time series data, aiming to maximize its representational capacity. We identify two key but often overlooked characteristics of time series: (1) unidirectional influence from the past to the future, and (2) the phenomenon of decaying influence over time. These characteristics are introduced to enhance the attention mechanism of Transformers. We propose TimeFormer, whose core innovation is a self-attention mechanism with two modulation terms (MoSA), designed to capture these temporal priors of time series under the constraints of the Hawkes process and causal masking. Additionally, TimeFormer introduces a framework based on multi-scale and subsequence analysis to capture semantic dependencies at different temporal scales, enriching the temporal dependencies. Extensive experiments conducted on multiple real-world datasets show that TimeFormer significantly outperforms state-of-the-art methods, achieving up to a 7.45% reduction in MSE compared to the best baseline and setting new benchmarks on 94.04% of evaluation metrics. Moreover, we demonstrate that the MoSA mechanism can be broadly applied to enhance the performance of other Transformer-based models.
Problem

Research questions and friction points this paper is trying to address.

Addressing insufficient modality differences between text and time series in Transformers
Modeling unidirectional past-to-future influence and decaying temporal effects
Capturing multi-scale semantic dependencies while enriching temporal relationships
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modulates self-attention with temporal priors
Uses Hawkes process and causal masking
Captures multi-scale semantic dependencies
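The modulation described above can be sketched as standard scaled dot-product attention with two additive log-space terms: a causal mask enforcing past-to-future influence, and a Hawkes-style exponential decay down-weighting distant past positions. This is a minimal illustrative sketch; the `decay` rate and exact parameterisation are assumptions, not the paper's formulation.

```python
import numpy as np

def modulated_attention(Q, K, V, decay=0.1):
    """Causal self-attention with a Hawkes-style decay modulation.

    Illustrative sketch only: MoSA's actual modulation terms may be
    parameterised differently. Q, K, V are (T, d) arrays for one head.
    """
    T, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)                 # (T, T) raw attention scores
    # Temporal prior 1: unidirectional causality -> mask future keys.
    future = np.triu(np.ones((T, T), dtype=bool), k=1)
    scores[future] = -np.inf
    # Temporal prior 2: decaying influence -> subtract decay * lag in
    # log-space, i.e. multiply weights by exp(-decay * (t_query - t_key)).
    lag = np.arange(T)[:, None] - np.arange(T)[None, :]
    scores = scores - decay * np.maximum(lag, 0)
    # Numerically stable softmax over keys.
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V
```

Because of the causal mask, the first query position can only attend to itself, so its output equals the first value vector exactly; larger `decay` values concentrate each row's weight on recent positions.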
Zhipeng Liu
Fidelity Technology at Fidelity Investments, Inc.
Auto ML, Trustworthy AI, IoT, Cybersecurity, Cloud Computing
Peibo Duan
University of Technology, Sydney
Intelligent transportation system, Graph neural network, Reinforcement learning
Xuan Tang
Software College, Northeastern University, Shenyang, China
Baixin Li
Software College, Northeastern University, Shenyang, China
Yongsheng Huang
Software College, Northeastern University, Shenyang, China
Mingyang Geng
National University of Defense Technology
Deep learning, Software engineering
Changsheng Zhang
Software College, Northeastern University, Shenyang, China
Bin Zhang
Software College, Northeastern University, Shenyang, China
Binwu Wang
University of Science and Technology of China
Spatiotemporal data, Graph learning, Traffic prediction