MODE: Efficient Time Series Prediction with Mamba Enhanced by Low-Rank Neural ODEs

📅 2026-01-01
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of simultaneously achieving efficiency, scalability, and accuracy in time series forecasting, particularly when modeling long-range dependencies and irregularly sampled data. To this end, the authors propose MODE, a novel framework that uniquely integrates low-rank Neural Ordinary Differential Equations (Neural ODEs) with an enhanced Mamba architecture. MODE efficiently captures temporal dynamics through a linear tokenization layer, causal convolutions, SiLU activation functions, and a low-rank ODE module. It further introduces an innovative piecewise selective scanning mechanism to enhance focus on critical subsequences and improve scalability over long sequences. Extensive experiments demonstrate that MODE significantly outperforms existing methods across multiple benchmark datasets, achieving state-of-the-art performance in both prediction accuracy and computational efficiency.
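The summary's "piecewise selective scanning" is not specified in detail on this page. As a rough illustration only, here is a minimal NumPy sketch of the general idea: split the sequence into segments, score each segment, and run a recurrence only over the most salient ones. Every name, the energy-based saliency score, and the toy linear recurrence below are assumptions for illustration, not the paper's actual Mamba-style mechanism.

```python
import numpy as np

# Hedged sketch of segment-wise selective scanning (all details assumed).
rng = np.random.default_rng(1)
x = rng.normal(size=256)               # toy 1-D input sequence
seg_len, top_k = 32, 4

segments = x.reshape(-1, seg_len)      # (8, 32) non-overlapping pieces
scores = (segments ** 2).mean(axis=1)  # crude saliency: mean energy per segment
keep = np.argsort(scores)[-top_k:]     # indices of the top-k salient segments

def scan(seg, a=0.9):
    """Simple linear recurrence h_t = a*h_{t-1} + x_t over one segment."""
    h = 0.0
    out = np.empty_like(seg)
    for t, xt in enumerate(seg):
        h = a * h + xt
        out[t] = h
    return out

# Only the selected segments are scanned, which is the source of the
# claimed scalability gain on long sequences.
scanned = {int(i): scan(segments[i]) for i in sorted(keep)}
print(len(scanned))  # 4 segments actually scanned
```

The design point this toy version captures: cost scales with the number of selected segments rather than the full sequence length.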

📝 Abstract
Time series prediction plays a pivotal role across diverse domains such as finance, healthcare, energy systems, and environmental modeling. However, existing approaches often struggle to balance efficiency, scalability, and accuracy, particularly when handling long-range dependencies and irregularly sampled data. To address these challenges, we propose MODE, a unified framework that integrates Low-Rank Neural Ordinary Differential Equations (Neural ODEs) with an Enhanced Mamba architecture. As illustrated in our framework, the input sequence is first transformed by a Linear Tokenization Layer and then processed through multiple Mamba Encoder blocks, each equipped with an Enhanced Mamba Layer that employs Causal Convolution, SiLU activation, and a Low-Rank Neural ODE enhancement to efficiently capture temporal dynamics. This low-rank formulation reduces computational overhead while maintaining expressive power. Furthermore, a segmented selective scanning mechanism, inspired by pseudo-ODE dynamics, adaptively focuses on salient subsequences to improve scalability and long-range sequence modeling. Extensive experiments on benchmark datasets demonstrate that MODE surpasses existing baselines in both predictive accuracy and computational efficiency. Overall, our contributions include: (1) a unified and efficient architecture for long-term time series modeling, (2) integration of Mamba's selective scanning with low-rank Neural ODEs for enhanced temporal representation, and (3) substantial improvements in efficiency and scalability enabled by low-rank approximation and dynamic selective scanning.
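The abstract's low-rank Neural ODE enhancement is described only at a high level here. As a rough sketch of why a low-rank formulation reduces computational overhead, consider a hidden state evolving as dh/dt = U Vᵀ h with U, V of shape d×r and r ≪ d, integrated with an explicit Euler step; each step then costs O(dr) rather than O(d²). The factorization, the Euler integrator, and all names below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

# Hedged sketch of a low-rank ODE step (assumed form, not the paper's module).
rng = np.random.default_rng(0)
d, r = 64, 4                      # hidden size and low rank, r << d
U = rng.normal(scale=0.1, size=(d, r))
V = rng.normal(scale=0.1, size=(d, r))

def low_rank_ode_step(h, dt=0.1):
    """One explicit Euler step of dh/dt = U @ (V.T @ h).

    Evaluating V.T @ h first keeps the cost at O(d*r) per step,
    versus O(d**2) for an unfactorized d-by-d weight matrix.
    """
    return h + dt * U @ (V.T @ h)

h = rng.normal(size=d)
for _ in range(10):               # integrate over a short horizon
    h = low_rank_ode_step(h)
print(h.shape)                    # (64,)
```

A full Neural ODE would replace the fixed Euler loop with an adaptive solver and learn U, V by backpropagation; the sketch only shows the low-rank cost structure.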
Problem

Research questions and friction points this paper is trying to address.

time series prediction
long-range dependencies
irregularly sampled data
computational efficiency
scalability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Low-Rank Neural ODEs
Mamba Architecture
Selective Scanning
Time Series Prediction
Efficient Modeling
Xingsheng Chen
School of Computing and Data Science, The University of Hong Kong
Regina Zhang
Department of Computing and Data Science, Nanyang Technological University
Bo Gao
School of Information Engineering, Beijing Institute of Graphic Communication
Xingwei He
Department of Computing and Data Science, The University of Hong Kong
Xiaofeng Liu
Assistant Professor, Yale University
Trustworthy AI, Computer Vision, Medical Image Analysis, Data Science, Health Informatics
Pietro Lio
University of Cambridge
Kwok-Yan Lam
Nanyang Technological University
Cybersecurity, Privacy-Preserving Technologies, Digital Trust, Distributed Systems, LegalTech
S. Yiu
University of Hong Kong