🤖 AI Summary
This work addresses the challenge of modeling complex dynamic relationships in high-dimensional vector autoregressive (VAR) models, where the number of parameters grows quadratically in the dimension and linearly in the lag order. To overcome this, we propose a unified framework that integrates sparse Tucker decomposition with graph regularization. By stacking the transition matrices into a third-order tensor, our approach substantially reduces the parameterization while enhancing local structural consistency among the factor matrices through graph-based regularization. Our method is the first to simultaneously impose structural sparsity and graph-structured constraints in high-dimensional VAR modeling, yielding improved non-asymptotic error bounds. Theoretical analysis establishes global convergence of the proposed algorithm, and extensive experiments on both synthetic and real-world datasets demonstrate its superiority over existing methods in terms of estimation accuracy and forecasting performance.
📝 Abstract
Existing vector autoregressive (VAR) methods for multivariate time series analysis use low-rank matrix approximation or Tucker decomposition to alleviate the over-parameterization issue. In this paper, we propose a sparse Tucker decomposition method with graph regularization for high-dimensional vector autoregressive time series. By stacking the time-series transition matrices into a third-order tensor, the sparse Tucker decomposition is employed to characterize important interactions within the transition tensor and to reduce the number of parameters. Moreover, graph regularization is employed to measure the local consistency of the response, predictor, and temporal factor matrices in the VAR model. The two proposed regularization techniques are shown to yield more accurate parameter estimation. A non-asymptotic error bound for the proposed estimator is established, which is lower than those of existing matrix- or tensor-based methods. A proximal alternating linearized minimization algorithm is designed to solve the resulting model, and its global convergence is established under very mild conditions. Extensive numerical experiments on synthetic data and real-world datasets verify the superior performance of the proposed method over existing state-of-the-art methods.
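As a rough illustration (not the paper's estimator or algorithm), the two ingredients described above can be sketched with NumPy: stacking the VAR(P) transition matrices into an N×N×P tensor and counting the parameter saving of a Tucker factorization, plus a graph-Laplacian penalty of the kind used to enforce local consistency of a factor matrix. The truncated HOSVD below is only a simple stand-in for a Tucker estimate, and all dimensions, ranks, and the adjacency matrix are illustrative assumptions.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a third-order tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated higher-order SVD: a simple (non-optimal) Tucker sketch."""
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :r]
         for n, r in enumerate(ranks)]
    # Core tensor G = T x_1 U1^T x_2 U2^T x_3 U3^T
    G = np.einsum('ijk,ia,jb,kc->abc', T, U[0], U[1], U[2])
    return G, U

rng = np.random.default_rng(0)
N, P = 20, 5                        # 20 series, lag order 5 (illustrative)
A = rng.standard_normal((N, N, P))  # stacked transition tensor

ranks = (3, 3, 2)                   # multilinear (Tucker) ranks, assumed
G, U = hosvd(A, ranks)

full_params = N * N * P             # unrestricted VAR(P): N^2 * P
tucker_params = int(np.prod(ranks)) + sum(s * r for s, r in zip((N, N, P), ranks))
print(full_params, tucker_params)   # 2000 vs 148

# Graph-regularization sketch: penalize rows of a factor matrix so that
# neighboring variables (per adjacency W) have similar factors, via
# tr(U^T L U) with Laplacian L = D - W (nonnegative since L is PSD).
W = (rng.random((N, N)) < 0.1).astype(float)
W = np.triu(W, 1); W = W + W.T      # symmetric adjacency, zero diagonal
L = np.diag(W.sum(axis=1)) - W
penalty = np.trace(U[0].T @ L @ U[0])
```

The parameter count drops from N²P to r₁r₂r₃ + Nr₁ + Nr₂ + Pr₃, which is the source of the dimension reduction the abstract refers to; the Laplacian term is one standard way to encode local consistency on a factor matrix.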