AI Summary
To address the high computational overhead, architectural redundancy, and training complexity of recurrent neural network (RNN) continual learning on edge devices, this paper proposes Minion GRU (MiRU), a lightweight GRU variant. MiRU eliminates conventional gating mechanisms and instead employs learnable scaling coefficients to dynamically update hidden states, enabling gate-free, parameter-efficient, and computationally lightweight continual learning. It achieves stable multi-task continual learning using only replay-based experience rehearsal, without requiring auxiliary regularization or interference-mitigation mechanisms. Experimental results demonstrate that MiRU accelerates training by 2.90× and reduces model parameters by 2.88× compared to the standard GRU, while matching its accuracy on image sequence classification and NLP benchmark tasks. These improvements significantly enhance the feasibility and practicality of continual learning on resource-constrained edge platforms.
Abstract
The increasing demand for continual learning in sequential data processing has led to progressively complex training methodologies and larger recurrent network architectures. Consequently, this has widened the gap between continual learning with recurrent neural networks (RNNs) and their ability to operate on devices with limited memory and compute. To address this challenge, we investigate the effectiveness of simplifying RNN architectures, particularly the gated recurrent unit (GRU), and the impact of such simplification on both single-task and multi-task sequential learning. We propose a new GRU variant, the minion recurrent unit (MiRU). MiRU replaces conventional gating mechanisms with scaling coefficients that regulate the dynamic updating of hidden states and historical context, reducing computational costs and memory requirements. Despite its simplified architecture, MiRU maintains performance comparable to the standard GRU while training 2.90× faster and using 2.88× fewer parameters, as demonstrated through evaluations on sequential image classification and natural language processing benchmarks. We also investigate the impact of model simplification on learning capacity by performing continual learning tasks with a rehearsal-based strategy and global inhibition. We find that MiRU maintains stable performance in multi-task learning even when using only rehearsal, unlike the standard GRU and its variants. These features position MiRU as a promising candidate for edge-device applications.
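To make the architectural idea concrete, the following is a minimal, hypothetical sketch of a gate-free recurrent cell in the spirit of MiRU. The abstract does not give the exact update equations, so the specific form here is an assumption: the GRU's data-dependent update and reset gates are replaced by learnable per-unit scaling coefficients (`alpha` for historical context, `beta` for the candidate state). The class name `MinionCell` and all parameter names are illustrative, not the authors' definitions.

```python
import numpy as np

class MinionCell:
    """Hypothetical gate-free recurrent cell: learnable scaling
    coefficients stand in for the GRU's gate networks (an illustrative
    assumption, not the paper's exact formulation)."""

    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(hidden_size)
        self.W = rng.uniform(-s, s, (hidden_size, input_size))   # input weights
        self.U = rng.uniform(-s, s, (hidden_size, hidden_size))  # recurrent weights
        self.b = np.zeros(hidden_size)
        # Learnable per-unit scalars replacing the update/reset gates.
        self.alpha = np.full(hidden_size, 0.5)  # weight on historical context
        self.beta = np.full(hidden_size, 0.5)   # weight on the candidate state

    def step(self, x, h):
        # Candidate state, computed as in a GRU but without a reset gate.
        h_tilde = np.tanh(self.W @ x + self.U @ h + self.b)
        # Blend old state and candidate via fixed-per-step coefficients
        # instead of a data-dependent update gate.
        return self.alpha * h + self.beta * h_tilde

def run(cell, xs):
    """Unroll the cell over a sequence, starting from a zero state."""
    h = np.zeros(cell.b.shape[0])
    for x in xs:
        h = cell.step(x, h)
    return h
```

Because the per-unit coefficients are plain parameters rather than gate networks, each step costs two matrix-vector products instead of the GRU's six, which is consistent with the reported reductions in parameters and training time.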