IRNN: Innovation-driven Recurrent Neural Network for Time-Series Data Modeling and Prediction

📅 2025-05-09
🤖 AI Summary
To address insufficient dynamic modeling capability and limited prediction accuracy for time-series data, this paper proposes the Innovation-driven Recurrent Neural Network (IRNN), the first RNN architecture to incorporate the "innovation" mechanism from Kalman filtering into hidden-state updates, leveraging historical prediction errors to dynamically refine internal representations. Because innovations depend on the network parameters themselves, standard training does not apply directly; the authors therefore design input updating-based back-propagation through time (IU-BPTT), an alternating optimization algorithm that interleaves regenerating the innovation inputs with gradient-based updates of the network parameters, thereby overcoming traditional BPTT's reliance on static inputs. IRNN is formulated as a nonlinear state-space model that tightly integrates innovation signals with RNN hidden states. Extensive experiments on multiple real-world benchmark datasets demonstrate that IRNN achieves significant improvements in prediction accuracy while keeping training overhead comparable to standard RNNs, validating its effectiveness and practicality.

📝 Abstract
Many real-world datasets are time series that are sequentially collected and contain rich temporal information. Thus, a common interest in practice is to capture dynamics of time series and predict their future evolutions. To this end, the recurrent neural network (RNN) has been a prevalent and effective machine learning option, which admits a nonlinear state-space model representation. Motivated by the resemblance between RNN and Kalman filter (KF) for linear state-space models, we propose in this paper Innovation-driven RNN (IRNN), a novel RNN architecture tailored to time-series data modeling and prediction tasks. By adapting the concept of "innovation" from KF to RNN, past prediction errors are adopted as additional input signals to update hidden states of RNN and boost prediction performance. Since innovation data depend on network parameters, existing training algorithms for RNN do not apply to IRNN straightforwardly. Thus, a tailored training algorithm dubbed input updating-based back-propagation through time (IU-BPTT) is further proposed, which alternates between updating innovations and optimizing network parameters via gradient descent. Experiments on real-world benchmark datasets show that the integration of innovations into various forms of RNN leads to remarkably improved prediction accuracy of IRNN without increasing the training cost substantially.
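The abstract's core idea — feeding the previous prediction error back into the hidden-state recursion as an extra input, in analogy to the Kalman filter's innovation — can be sketched in a few lines. This is a minimal illustration, not the paper's exact formulation: all weight names (`W_x`, `W_h`, `W_e`, `W_o`), dimensions, and the tanh/linear readout choice are assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions are arbitrary, chosen only for illustration.
n_x, n_h, n_y = 3, 8, 1

# Hypothetical weights: input-to-hidden, hidden-to-hidden,
# innovation-to-hidden, and hidden-to-output.
W_x = rng.normal(scale=0.1, size=(n_h, n_x))
W_h = rng.normal(scale=0.1, size=(n_h, n_h))
W_e = rng.normal(scale=0.1, size=(n_h, n_y))
W_o = rng.normal(scale=0.1, size=(n_y, n_h))

def irnn_step(h_prev, x_t, e_prev):
    """One IRNN-style update: the innovation e_prev (the previous
    prediction error) enters the hidden-state recursion as an
    additional input alongside the observation x_t."""
    h_t = np.tanh(W_h @ h_prev + W_x @ x_t + W_e @ e_prev)
    y_hat = W_o @ h_t
    return h_t, y_hat

# Roll the cell over a toy sequence, feeding innovations back in.
T = 20
X = rng.normal(size=(T, n_x))
Y = rng.normal(size=(T, n_y))

h = np.zeros(n_h)
e = np.zeros(n_y)              # innovation starts at zero
for t in range(T):
    h, y_hat = irnn_step(h, X[t], e)
    e = Y[t] - y_hat           # innovation = observed minus predicted
```

A plain RNN is recovered by setting `W_e` to zero, which makes the innovation pathway the only structural difference between the two cells.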
Problem

Research questions and friction points this paper is trying to address.

Modeling and predicting time-series data dynamics
Enhancing RNN performance using innovation concept
Developing tailored training algorithm for IRNN
Innovation

Methods, ideas, or system contributions that make the work stand out.

Innovation-driven RNN adapts Kalman filter concepts
Uses past prediction errors to update hidden states
Proposes IU-BPTT for efficient training algorithm
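The alternating structure of IU-BPTT described above — regenerate the innovation sequence with the current parameters, then train with those innovations frozen as static inputs — can be sketched as follows. This is a toy reconstruction under stated assumptions: the model, hyperparameters, and helper names (`rollout_innovations`, `numeric_grads`) are illustrative, and finite-difference gradients stand in for the analytic BPTT gradients the paper would use.

```python
import numpy as np

rng = np.random.default_rng(1)
n_x, n_h, n_y, T = 2, 4, 1, 15

def init_params():
    s = lambda *shape: rng.normal(scale=0.2, size=shape)
    return {"W_x": s(n_h, n_x), "W_h": s(n_h, n_h),
            "W_e": s(n_h, n_y), "W_o": s(n_y, n_h)}

def forward(p, X, E):
    """Roll the network with innovations E treated as *static*
    inputs, as in the BPTT half-step of IU-BPTT."""
    h, preds = np.zeros(n_h), []
    for t in range(len(X)):
        h = np.tanh(p["W_h"] @ h + p["W_x"] @ X[t] + p["W_e"] @ E[t])
        preds.append(p["W_o"] @ h)
    return np.array(preds)

def rollout_innovations(p, X, Y):
    """Input-update half-step: regenerate the innovation sequence
    closed-loop with the current parameters."""
    h, e, E = np.zeros(n_h), np.zeros(n_y), []
    for t in range(len(X)):
        E.append(e)
        h = np.tanh(p["W_h"] @ h + p["W_x"] @ X[t] + p["W_e"] @ e)
        e = Y[t] - p["W_o"] @ h
    return np.array(E)

def loss(p, X, E, Y):
    return float(np.mean((forward(p, X, E) - Y) ** 2))

def numeric_grads(p, X, E, Y, eps=1e-5):
    # Finite differences stand in for analytic BPTT gradients here.
    g = {}
    for k, W in p.items():
        G = np.zeros_like(W)
        for idx in np.ndindex(W.shape):
            W[idx] += eps; lp = loss(p, X, E, Y)
            W[idx] -= 2 * eps; lm = loss(p, X, E, Y)
            W[idx] += eps
            G[idx] = (lp - lm) / (2 * eps)
        g[k] = G
    return g

X = rng.normal(size=(T, n_x))
Y = np.sin(np.linspace(0, 3, T)).reshape(T, n_y)

p = init_params()
E = rollout_innovations(p, X, Y)
l0 = loss(p, X, E, Y)
for _ in range(50):
    E = rollout_innovations(p, X, Y)    # step 1: update innovations
    g = numeric_grads(p, X, E, Y)       # step 2: gradients with E fixed
    for k in p:
        p[k] -= 0.05 * g[k]
l1 = loss(p, X, E, Y)
```

The point of the alternation is that during the gradient half-step the innovations are constants, so the loss is differentiable with ordinary BPTT machinery; the dependence of innovations on parameters is handled by the separate input-update half-step.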
Yifan Zhou
The Department of Automation, Tsinghua University, Beijing 100084, China
Yibo Wang
The Department of Automation, Tsinghua University, Beijing 100084, China
Chao Shang
Amazon AWS AI