🤖 AI Summary
This paper addresses the performance limitation of multivariate time series forecasting caused by entangled temporal dependencies. To tackle this, we propose a linear modeling framework operating in an orthogonal transformation domain. Our key contributions are: (1) OrthoTrans, a data-adaptive orthogonal transformation that decouples step-wise temporal dependencies via orthogonal diagonalization of the temporal Pearson correlation matrix; (2) NormLin, a lightweight normalized linear layer replacing multi-head self-attention, reducing computational cost by ~50% while improving accuracy; and (3) plug-and-play compatibility, allowing both modules to be integrated seamlessly into existing forecasters. Evaluated on 24 benchmarks across 140 forecasting tasks, our method achieves state-of-the-art performance, notably boosting the accuracy of Transformer-based models. The source code and datasets are publicly available.
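The OrthoTrans idea above can be sketched briefly. Because a Pearson correlation matrix is real and symmetric, its eigendecomposition yields an orthogonal basis that diagonalizes it; projecting each series onto that basis gives a decorrelated feature domain, and the inverse transform is just the transpose. The helper name `orthotrans_basis` and the shapes are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def orthotrans_basis(X):
    """Illustrative sketch of a data-adaptive orthogonal basis
    (hypothetical helper; not the paper's exact implementation).

    X: (num_series, seq_len) -- training windows stacked row-wise.
    Returns Q of shape (seq_len, seq_len) whose columns are eigenvectors
    of the temporal Pearson correlation matrix, so Q.T @ R @ Q is diagonal.
    """
    # Temporal Pearson correlation: correlate time steps across series.
    R = np.corrcoef(X, rowvar=False)       # (seq_len, seq_len), symmetric
    # Real symmetric matrix -> orthogonal diagonalization via eigh.
    _, Q = np.linalg.eigh(R)
    return Q

# Encode/decode in the decorrelated domain.
rng = np.random.default_rng(0)
X = rng.standard_normal((256, 16))          # 256 series, length 16
Q = orthotrans_basis(X)
Z = X @ Q                                    # forward transform (encode)
X_rec = Z @ Q.T                              # inverse transform (decode)
assert np.allclose(X, X_rec)                 # orthogonality => lossless
```

Since `Q` is orthogonal, the transform is lossless and adds negligible cost, which is what makes it usable as a plug-in around an existing forecaster's encoder and decoder.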
📝 Abstract
This paper presents $\mathbf{OLinear}$, a $\mathbf{linear}$-based multivariate time series forecasting model that operates in an $\mathbf{o}$rthogonally transformed domain. Recent forecasting models typically adopt the temporal forecast (TF) paradigm, which directly encodes and decodes time series in the time domain. However, the entangled step-wise dependencies in series data can hinder the performance of TF. To address this, some forecasters conduct encoding and decoding in a transformed domain using fixed, dataset-independent bases (e.g., sine and cosine signals in the Fourier transform). In contrast, we utilize $\mathbf{OrthoTrans}$, a data-adaptive transformation based on an orthogonal matrix that diagonalizes the series' temporal Pearson correlation matrix. This approach enables more effective encoding and decoding in the decorrelated feature domain and can serve as a plug-in module to enhance existing forecasters. To strengthen representation learning for multivariate time series, we introduce a customized linear layer, $\mathbf{NormLin}$, which employs a normalized weight matrix to capture multivariate dependencies. Empirically, the NormLin module shows a surprising performance advantage over multi-head self-attention while requiring nearly half the FLOPs. Extensive experiments on 24 benchmarks and 140 forecasting tasks demonstrate that OLinear consistently achieves state-of-the-art performance with high efficiency. Notably, as a plug-in replacement for self-attention, the NormLin module consistently enhances Transformer-based forecasters. The code and datasets are available at https://anonymous.4open.science/r/OLinear
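To make the NormLin description concrete, here is a minimal sketch of a linear layer over the variate dimension whose weight matrix is normalized before being applied. The row-wise softmax used here is one plausible normalization choice and the function name `normlin` is hypothetical; the paper defines the exact scheme. The key contrast with self-attention is that the (normalized) mixing matrix is a learned parameter rather than computed from queries and keys, so no QK matmul is needed:

```python
import numpy as np

def normlin(X, W):
    """Hypothetical NormLin-style layer (assumed form, not the paper's exact one).

    X: (num_variates, d_model) token matrix, one token per variate.
    W: (num_variates, num_variates) learnable weights.
    The normalized weight matrix plays a role analogous to an attention map,
    but is a static learned parameter rather than input-dependent.
    """
    # Numerically stable row-wise softmax -> row-stochastic mixing matrix.
    W_shift = W - W.max(axis=-1, keepdims=True)
    W_norm = np.exp(W_shift) / np.exp(W_shift).sum(axis=-1, keepdims=True)
    return W_norm @ X  # each output variate is a convex mix of input variates

rng = np.random.default_rng(1)
X = rng.standard_normal((8, 32))   # 8 variates, model width 32
W = rng.standard_normal((8, 8))
out = normlin(X, W)                # shape (8, 32)
```

Under this assumed form, the cost is a single matrix product per layer, which is consistent with the abstract's claim of roughly half the FLOPs of multi-head self-attention.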