DLFormer: Enhancing Explainability in Multivariate Time Series Forecasting using Distributed Lag Embedding

📅 2024-08-29
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In multivariate time series forecasting (MTSF), existing Transformer-based models achieve high accuracy but lack interpretability, limiting their deployment in high-stakes domains such as healthcare and finance. Method: We propose a causality-aware Distributed Lag Embedding (DLE) mechanism that explicitly models variable-level historical lag effects within the attention architecture—enabling both variable-wise and time-step-wise interpretability. DLE integrates differentiable lag-weight learning with multivariate temporal attribution analysis to support statistically validated causal explanations without compromising predictive performance. Contribution/Results: Evaluated on multiple real-world datasets, our method reduces prediction error by 12.7% on average over state-of-the-art attention-based models. It establishes a new paradigm for trustworthy MTSF that jointly ensures high accuracy and rigorous, interpretable causal reasoning.

Technology Category

Application Category

📝 Abstract
. Most real-world variables are multivariate time series influenced by past values and explanatory factors. Consequently, predicting these time series data using artificial intelligence is ongoing. In particular, in fields such as healthcare and finance, where reliability is crucial, having understandable explanations for predictions is essential. However, achieving a balance between high prediction accuracy and intuitive explainability has proven challenging. Although attention-based models have limitations in representing the individual influences of each variable, these models can influence the temporal dependencies in time series prediction and the magnitude of the influence of individual variables. To address this issue, this study introduced DLFormer, an attention-based architecture integrated with distributed lag embedding, to temporally embed individual variables and capture their temporal influence. Through validation against various real-world datasets, DLFormer showcased superior performance improvements compared to existing attention-based high-performance models. Furthermore, comparing the relationships between variables enhanced the reliability of explainability.
Problem

Research questions and friction points this paper is trying to address.

Enhance explainability in multivariate time series forecasting
Model local and global temporal dependencies effectively
Improve accuracy and interpretability in big data analytics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distributed Lag Transformer for explainable MTSF
Time-Variable-Aware Learning captures temporal dependencies
State of the art accuracy with interpretable insights
🔎 Similar Papers
No similar papers found.
Y
Younghwi Kim
Safe & Clean Supply Chain Research Center, Pusan National University, 30-Jan-jeon Dong, Geum -Jeong Gu, 46241, Busan, South Korea
Dohee Kim
Dohee Kim
Pusan National University
Time-series AnalysisAIDeep LearningOptimization
Sunghyun Sim
Sunghyun Sim
Changwon National University
Industrial EngineeringData ScienceIndustrial AIEfficient AI