OLinear: A Linear Model for Time Series Forecasting in Orthogonally Transformed Domain

📅 2025-05-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the performance limitation of multivariate time series forecasting caused by entangled temporal dependencies. To tackle this, we propose a linear modeling framework operating in an orthogonal transformation domain. Our key contributions are: (1) OrthoTrans, a data-adaptive orthogonal transformation that decouples inter-variable dependencies via orthogonal diagonalization of the temporal Pearson correlation matrix; (2) NormLin, a lightweight normalized linear layer replacing multi-head self-attention—reducing computational cost by ~50% while improving accuracy; and (3) plug-and-play compatibility, enabling seamless integration to enhance existing forecasters. Evaluated on 24 benchmarks across 140 forecasting tasks, our method achieves state-of-the-art performance, notably boosting the accuracy of Transformer-based models. The source code and datasets are publicly available.

📝 Abstract
This paper presents $\mathbf{OLinear}$, a $\mathbf{linear}$-based multivariate time series forecasting model that operates in an $\mathbf{o}$rthogonally transformed domain. Recent forecasting models typically adopt the temporal forecast (TF) paradigm, which directly encodes and decodes time series in the time domain. However, the entangled step-wise dependencies in series data can hinder the performance of TF. To address this, some forecasters conduct encoding and decoding in a transformed domain using fixed, dataset-independent bases (e.g., sine and cosine signals in the Fourier transform). In contrast, we utilize $\mathbf{OrthoTrans}$, a data-adaptive transformation based on the orthogonal matrix that diagonalizes the series' temporal Pearson correlation matrix. This approach enables more effective encoding and decoding in the decorrelated feature domain and can serve as a plug-in module to enhance existing forecasters. To strengthen representation learning for multivariate time series, we introduce a customized linear layer, $\mathbf{NormLin}$, which employs a normalized weight matrix to capture multivariate dependencies. Empirically, the NormLin module shows a surprising performance advantage over multi-head self-attention while requiring nearly half the FLOPs. Extensive experiments on 24 benchmarks and 140 forecasting tasks demonstrate that OLinear consistently achieves state-of-the-art performance with high efficiency. Notably, as a plug-in replacement for self-attention, the NormLin module consistently enhances Transformer-based forecasters. The code and datasets are available at https://anonymous.4open.science/r/OLinear
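The OrthoTrans idea rests on a standard linear-algebra fact: a real symmetric matrix (such as a Pearson correlation matrix) is orthogonally diagonalizable, so its eigenvector matrix gives an invertible, decorrelating change of basis. The sketch below illustrates that construction with NumPy; the function names are hypothetical, and the paper's exact pipeline (windowing, normalization, how the correlation matrix is estimated) may differ.

```python
import numpy as np

def orthotrans_basis(X):
    """Build an orthogonal basis from the temporal Pearson correlation matrix.

    X: (num_series, seq_len) training windows stacked row-wise.
    Returns Q with Q @ Q.T == I whose columns diagonalize the correlation
    matrix. Hypothetical helper illustrating the idea, not the paper's code.
    """
    # Correlate time steps (columns) across series: (seq_len, seq_len), symmetric
    R = np.corrcoef(X, rowvar=False)
    # Real symmetric => orthogonally diagonalizable: R = Q diag(w) Q^T
    w, Q = np.linalg.eigh(R)
    return Q

def forward_transform(x, Q):
    # Encode a length-L window in the decorrelated feature domain
    return Q.T @ x

def inverse_transform(z, Q):
    # Orthogonality makes the inverse a plain transpose: Q @ Q.T @ x == x
    return Q @ z
```

Because the transform is orthogonal, encoding and decoding are exact inverses of each other, which is what lets OrthoTrans act as a lossless plug-in around an existing forecaster.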
Problem

Research questions and friction points this paper is trying to address.

Addresses entangled step-wise dependencies in time series forecasting
Proposes data-adaptive orthogonal transformation for decorrelated feature encoding
Introduces normalized linear layer to capture multivariate dependencies efficiently
Innovation

Methods, ideas, or system contributions that make the work stand out.

Linear model in orthogonally transformed domain
Data-adaptive OrthoTrans for decorrelated features
NormLin layer for efficient multivariate dependencies
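To make the NormLin contribution concrete: instead of computing attention scores from Q/K/V projections at every forward pass, a normalized linear layer mixes variates with a single learned weight matrix whose rows are normalized like attention weights. The sketch below assumes row-wise softmax normalization; the paper's exact normalization and any surrounding nonlinearities may differ.

```python
import numpy as np

def softmax(a, axis=-1):
    # Numerically stable softmax
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def normlin(X, W):
    """NormLin-style token mixing (illustrative sketch, not the paper's code).

    X: (num_vars, d_model) -- one token per variate
    W: (num_vars, num_vars) -- learnable raw mixing weights
    """
    W_norm = softmax(W, axis=-1)  # each row sums to 1, like attention weights
    return W_norm @ X             # static O(N^2 d) mixing, no Q/K/V projections
```

Since the mixing matrix is input-independent, the per-step cost drops to one matrix product, which is consistent with the reported ~50% FLOP reduction relative to multi-head self-attention.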
Authors

- Wenzhen Yue — Peking University (AI, data mining, signal processing)
- Yong Liu — School of Software, BNRist, Tsinghua University, Beijing 100084
- Haoxuan Li — Center for Data Science, Peking University, Beijing 100871
- Hao Wang — Department of Control Science and Engineering, Zhejiang University, Hangzhou 310058
- Xianghua Ying — Peking University (computer vision, pattern recognition)
- Ruohao Guo — Peking University (multi-modal learning, computer vision, video generation)
- Bo Xing — State Key Laboratory of General AI, Peking University, Beijing 100871
- Ji Shi — Peking University (computer vision, 3D vision, neural rendering)