🤖 AI Summary
In multivariate time series forecasting (MTSF), existing Transformer-based models achieve high accuracy but lack interpretability, limiting their deployment in high-stakes domains such as healthcare and finance. Method: We propose a causality-aware Distributed Lag Embedding (DLE) mechanism that explicitly models variable-level historical lag effects within the attention architecture—enabling both variable-wise and time-step-wise interpretability. DLE integrates differentiable lag-weight learning with multivariate temporal attribution analysis to support statistically validated causal explanations without compromising predictive performance. Contribution/Results: Evaluated on multiple real-world datasets, our method reduces prediction error by 12.7% on average over state-of-the-art attention-based models. It establishes a new paradigm for trustworthy MTSF that jointly ensures high accuracy and rigorous, interpretable causal reasoning.
📝 Abstract
. Most real-world variables are multivariate time series influenced by past values and explanatory factors. Consequently, predicting these time series data using artificial intelligence is ongoing. In particular, in fields such as healthcare and finance, where reliability is crucial, having understandable explanations for predictions is essential. However, achieving a balance between high prediction accuracy and intuitive explainability has proven challenging. Although attention-based models have limitations in representing the individual influences of each variable, these models can influence the temporal dependencies in time series prediction and the magnitude of the influence of individual variables. To address this issue, this study introduced DLFormer, an attention-based architecture integrated with distributed lag embedding, to temporally embed individual variables and capture their temporal influence. Through validation against various real-world datasets, DLFormer showcased superior performance improvements compared to existing attention-based high-performance models. Furthermore, comparing the relationships between variables enhanced the reliability of explainability.